00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 632 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3292 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.042 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.043 The recommended git tool is: git 00:00:00.043 using credential 00000000-0000-0000-0000-000000000002 00:00:00.045 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.064 Fetching changes from the remote Git repository 00:00:00.066 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.101 Using shallow fetch with depth 1 00:00:00.101 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.102 > git --version # timeout=10 00:00:00.141 > git --version # 'git version 2.39.2' 00:00:00.141 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.170 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.170 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.245 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.255 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.266 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:03.266 > git config core.sparsecheckout # timeout=10 00:00:03.276 > git read-tree -mu HEAD # timeout=10 00:00:03.292 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:03.313 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:03.313 > git rev-list --no-walk 456d80899d5187c68de113852b37bde1201fd33a # timeout=10 00:00:03.431 [Pipeline] Start of Pipeline 00:00:03.446 [Pipeline] library 00:00:03.447 Loading library shm_lib@master 00:00:03.447 Library shm_lib@master is cached. Copying from home. 00:00:03.462 [Pipeline] node 00:00:03.471 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest_2 00:00:03.472 [Pipeline] { 00:00:03.480 [Pipeline] catchError 00:00:03.481 [Pipeline] { 00:00:03.491 [Pipeline] wrap 00:00:03.499 [Pipeline] { 00:00:03.506 [Pipeline] stage 00:00:03.508 [Pipeline] { (Prologue) 00:00:03.522 [Pipeline] echo 00:00:03.523 Node: VM-host-WFP1 00:00:03.528 [Pipeline] cleanWs 00:00:03.535 [WS-CLEANUP] Deleting project workspace... 00:00:03.535 [WS-CLEANUP] Deferred wipeout is used... 
00:00:03.540 [WS-CLEANUP] done 00:00:03.704 [Pipeline] setCustomBuildProperty 00:00:03.766 [Pipeline] httpRequest 00:00:03.785 [Pipeline] echo 00:00:03.786 Sorcerer 10.211.164.101 is alive 00:00:03.793 [Pipeline] httpRequest 00:00:03.796 HttpMethod: GET 00:00:03.797 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:03.797 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:03.798 Response Code: HTTP/1.1 200 OK 00:00:03.799 Success: Status code 200 is in the accepted range: 200,404 00:00:03.800 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:03.942 [Pipeline] sh 00:00:04.222 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:04.236 [Pipeline] httpRequest 00:00:04.249 [Pipeline] echo 00:00:04.250 Sorcerer 10.211.164.101 is alive 00:00:04.256 [Pipeline] httpRequest 00:00:04.260 HttpMethod: GET 00:00:04.261 URL: http://10.211.164.101/packages/spdk_8711e7e9b320e91cd9789b05190f8c3dbba55125.tar.gz 00:00:04.261 Sending request to url: http://10.211.164.101/packages/spdk_8711e7e9b320e91cd9789b05190f8c3dbba55125.tar.gz 00:00:04.262 Response Code: HTTP/1.1 200 OK 00:00:04.263 Success: Status code 200 is in the accepted range: 200,404 00:00:04.264 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_8711e7e9b320e91cd9789b05190f8c3dbba55125.tar.gz 00:00:22.325 [Pipeline] sh 00:00:22.606 + tar --no-same-owner -xf spdk_8711e7e9b320e91cd9789b05190f8c3dbba55125.tar.gz 00:00:25.152 [Pipeline] sh 00:00:25.434 + git -C spdk log --oneline -n5 00:00:25.434 8711e7e9b autotest: reduce accel tests runs with SPDK_TEST_ACCEL flag 00:00:25.434 50222f810 configure: don't exit on non Intel platforms 00:00:25.434 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:00:25.434 ba69d4678 event/scheduler: remove custom opts from static scheduler 00:00:25.434 79fce488b test/scheduler: test scheduling period with dynamic scheduler 00:00:25.456 [Pipeline] withCredentials 00:00:25.468 > git --version # timeout=10 00:00:25.480 > git --version # 'git version 2.39.2' 00:00:25.496 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:25.499 [Pipeline] { 00:00:25.509 [Pipeline] retry 00:00:25.511 [Pipeline] { 00:00:25.528 [Pipeline] sh 00:00:25.808 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:27.736 [Pipeline] } 00:00:27.760 [Pipeline] // retry 00:00:27.767 [Pipeline] } 00:00:27.788 [Pipeline] // withCredentials 00:00:27.797 [Pipeline] httpRequest 00:00:27.832 [Pipeline] echo 00:00:27.833 Sorcerer 10.211.164.101 is alive 00:00:27.843 [Pipeline] httpRequest 00:00:27.848 HttpMethod: GET 00:00:27.848 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:27.849 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:27.854 Response Code: HTTP/1.1 200 OK 00:00:27.855 Success: Status code 200 is in the accepted range: 200,404 00:00:27.855 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:25.748 [Pipeline] sh 00:01:26.062 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:27.452 [Pipeline] sh 00:01:27.733 + git -C dpdk log --oneline -n5 00:01:27.733 eeb0605f11 version: 23.11.0 00:01:27.733 238778122a doc: update release notes for 23.11 00:01:27.733 46aa6b3cfc 
doc: fix description of RSS features 00:01:27.733 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:27.733 7e421ae345 devtools: support skipping forbid rule check 00:01:27.751 [Pipeline] writeFile 00:01:27.769 [Pipeline] sh 00:01:28.052 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:28.064 [Pipeline] sh 00:01:28.344 + cat autorun-spdk.conf 00:01:28.344 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.344 SPDK_TEST_NVME=1 00:01:28.344 SPDK_TEST_FTL=1 00:01:28.344 SPDK_TEST_ISAL=1 00:01:28.344 SPDK_RUN_ASAN=1 00:01:28.344 SPDK_RUN_UBSAN=1 00:01:28.344 SPDK_TEST_XNVME=1 00:01:28.344 SPDK_TEST_NVME_FDP=1 00:01:28.344 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:28.344 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:28.344 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:28.351 RUN_NIGHTLY=1 00:01:28.353 [Pipeline] } 00:01:28.370 [Pipeline] // stage 00:01:28.387 [Pipeline] stage 00:01:28.389 [Pipeline] { (Run VM) 00:01:28.404 [Pipeline] sh 00:01:28.686 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:28.687 + echo 'Start stage prepare_nvme.sh' 00:01:28.687 Start stage prepare_nvme.sh 00:01:28.687 + [[ -n 0 ]] 00:01:28.687 + disk_prefix=ex0 00:01:28.687 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]] 00:01:28.687 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]] 00:01:28.687 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf 00:01:28.687 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.687 ++ SPDK_TEST_NVME=1 00:01:28.687 ++ SPDK_TEST_FTL=1 00:01:28.687 ++ SPDK_TEST_ISAL=1 00:01:28.687 ++ SPDK_RUN_ASAN=1 00:01:28.687 ++ SPDK_RUN_UBSAN=1 00:01:28.687 ++ SPDK_TEST_XNVME=1 00:01:28.687 ++ SPDK_TEST_NVME_FDP=1 00:01:28.687 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:28.687 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:28.687 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:28.687 ++ RUN_NIGHTLY=1 00:01:28.687 + cd /var/jenkins/workspace/nvme-vg-autotest_2 00:01:28.687 + nvme_files=() 00:01:28.687 + declare -A nvme_files 00:01:28.687 + backend_dir=/var/lib/libvirt/images/backends 00:01:28.687 + nvme_files['nvme.img']=5G 00:01:28.687 + nvme_files['nvme-cmb.img']=5G 00:01:28.687 + nvme_files['nvme-multi0.img']=4G 00:01:28.687 + nvme_files['nvme-multi1.img']=4G 00:01:28.687 + nvme_files['nvme-multi2.img']=4G 00:01:28.687 + nvme_files['nvme-openstack.img']=8G 00:01:28.687 + nvme_files['nvme-zns.img']=5G 00:01:28.687 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:28.687 + (( SPDK_TEST_FTL == 1 )) 00:01:28.687 + nvme_files["nvme-ftl.img"]=6G 00:01:28.687 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:28.687 + nvme_files["nvme-fdp.img"]=1G 00:01:28.687 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:28.687 + for nvme in "${!nvme_files[@]}" 00:01:28.687 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:28.687 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:28.687 + for nvme in "${!nvme_files[@]}" 00:01:28.687 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-ftl.img -s 6G 00:01:28.687 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:28.687 + for nvme in "${!nvme_files[@]}" 00:01:28.687 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:28.687 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:28.946 + for nvme in "${!nvme_files[@]}" 00:01:28.946 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:28.946 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:28.946 + for nvme in "${!nvme_files[@]}" 00:01:28.946 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:28.946 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:28.946 + for nvme in "${!nvme_files[@]}" 00:01:28.946 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:28.946 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:28.946 + for nvme in "${!nvme_files[@]}" 00:01:28.946 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:28.946 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:28.946 + for nvme in "${!nvme_files[@]}" 00:01:28.946 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-fdp.img -s 1G 00:01:29.205 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:29.205 + for nvme in "${!nvme_files[@]}" 00:01:29.205 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:29.773 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:29.773 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:29.773 + echo 'End stage prepare_nvme.sh' 00:01:29.773 End stage prepare_nvme.sh 00:01:29.786 [Pipeline] sh 00:01:30.067 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:30.067 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex0-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:01:30.067 00:01:30.067 
DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant 00:01:30.067 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk 00:01:30.067 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2 00:01:30.067 HELP=0 00:01:30.067 DRY_RUN=0 00:01:30.067 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,/var/lib/libvirt/images/backends/ex0-nvme-fdp.img, 00:01:30.067 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:30.067 NVME_AUTO_CREATE=0 00:01:30.068 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,, 00:01:30.068 NVME_CMB=,,,, 00:01:30.068 NVME_PMR=,,,, 00:01:30.068 NVME_ZNS=,,,, 00:01:30.068 NVME_MS=true,,,, 00:01:30.068 NVME_FDP=,,,on, 00:01:30.068 SPDK_VAGRANT_DISTRO=fedora38 00:01:30.068 SPDK_VAGRANT_VMCPU=10 00:01:30.068 SPDK_VAGRANT_VMRAM=12288 00:01:30.068 SPDK_VAGRANT_PROVIDER=libvirt 00:01:30.068 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:30.068 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:30.068 SPDK_OPENSTACK_NETWORK=0 00:01:30.068 VAGRANT_PACKAGE_BOX=0 00:01:30.068 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:30.068 FORCE_DISTRO=true 00:01:30.068 VAGRANT_BOX_VERSION= 00:01:30.068 EXTRA_VAGRANTFILES= 00:01:30.068 NIC_MODEL=e1000 00:01:30.068 00:01:30.068 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt' 00:01:30.068 /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest_2 00:01:33.356 Bringing machine 'default' up with 'libvirt' provider... 00:01:33.924 ==> default: Creating image (snapshot of base box volume). 00:01:34.183 ==> default: Creating domain with the following settings... 
00:01:34.184 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721813230_2d7245076c243a9f2eb0 00:01:34.184 ==> default: -- Domain type: kvm 00:01:34.184 ==> default: -- Cpus: 10 00:01:34.184 ==> default: -- Feature: acpi 00:01:34.184 ==> default: -- Feature: apic 00:01:34.184 ==> default: -- Feature: pae 00:01:34.184 ==> default: -- Memory: 12288M 00:01:34.184 ==> default: -- Memory Backing: hugepages: 00:01:34.184 ==> default: -- Management MAC: 00:01:34.184 ==> default: -- Loader: 00:01:34.184 ==> default: -- Nvram: 00:01:34.184 ==> default: -- Base box: spdk/fedora38 00:01:34.184 ==> default: -- Storage pool: default 00:01:34.184 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721813230_2d7245076c243a9f2eb0.img (20G) 00:01:34.184 ==> default: -- Volume Cache: default 00:01:34.184 ==> default: -- Kernel: 00:01:34.184 ==> default: -- Initrd: 00:01:34.184 ==> default: -- Graphics Type: vnc 00:01:34.184 ==> default: -- Graphics Port: -1 00:01:34.184 ==> default: -- Graphics IP: 127.0.0.1 00:01:34.184 ==> default: -- Graphics Password: Not defined 00:01:34.184 ==> default: -- Video Type: cirrus 00:01:34.184 ==> default: -- Video VRAM: 9216 00:01:34.184 ==> default: -- Sound Type: 00:01:34.184 ==> default: -- Keymap: en-us 00:01:34.184 ==> default: -- TPM Path: 00:01:34.184 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:34.184 ==> default: -- Command line args: 00:01:34.184 ==> default: -> value=-device, 00:01:34.184 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:34.184 ==> default: -> value=-drive, 00:01:34.184 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:34.184 ==> default: -> value=-device, 00:01:34.184 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:34.184 ==> default: -> value=-device, 00:01:34.184 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:34.184 ==> default: -> value=-drive, 00:01:34.184 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-1-drive0, 00:01:34.184 ==> default: -> value=-device, 00:01:34.184 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.184 ==> default: -> value=-device, 00:01:34.184 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:34.184 ==> default: -> value=-drive, 00:01:34.184 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:34.184 ==> default: -> value=-device, 00:01:34.184 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.184 ==> default: -> value=-drive, 00:01:34.184 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:34.184 ==> default: -> value=-device, 00:01:34.184 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.184 ==> default: -> value=-drive, 00:01:34.184 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:34.184 ==> default: -> value=-device, 00:01:34.184 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.184 ==> default: -> value=-device, 00:01:34.184 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:34.184 ==> default: -> value=-device, 00:01:34.184 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:34.184 ==> default: -> value=-drive, 00:01:34.184 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:34.184 ==> default: -> value=-device, 00:01:34.184 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.444 ==> default: Creating shared folders metadata... 00:01:34.445 ==> default: Starting domain. 00:01:37.027 ==> default: Waiting for domain to get an IP address... 00:01:55.164 ==> default: Waiting for SSH to become available... 00:01:55.164 ==> default: Configuring and enabling network interfaces... 00:01:59.374 default: SSH address: 192.168.121.55:22 00:01:59.374 default: SSH username: vagrant 00:01:59.374 default: SSH auth method: private key 00:02:02.658 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:10.831 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:16.182 ==> default: Mounting SSHFS shared folder... 00:02:18.737 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:18.737 ==> default: Checking Mount.. 00:02:20.640 ==> default: Folder Successfully Mounted! 00:02:20.640 ==> default: Running provisioner: file... 00:02:21.574 default: ~/.gitconfig => .gitconfig 00:02:22.139 00:02:22.139 SUCCESS! 00:02:22.139 00:02:22.139 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:02:22.139 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:22.139 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 00:02:22.139 00:02:22.149 [Pipeline] } 00:02:22.169 [Pipeline] // stage 00:02:22.180 [Pipeline] dir 00:02:22.180 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt 00:02:22.182 [Pipeline] { 00:02:22.198 [Pipeline] catchError 00:02:22.200 [Pipeline] { 00:02:22.214 [Pipeline] sh 00:02:22.492 + vagrant ssh-config --host vagrant 00:02:22.492 + sed -ne /^Host/,$p 00:02:22.492 + tee ssh_conf 00:02:25.785 Host vagrant 00:02:25.785 HostName 192.168.121.55 00:02:25.785 User vagrant 00:02:25.785 Port 22 00:02:25.785 UserKnownHostsFile /dev/null 00:02:25.785 StrictHostKeyChecking no 00:02:25.785 PasswordAuthentication no 00:02:25.785 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:25.785 IdentitiesOnly yes 00:02:25.785 LogLevel FATAL 00:02:25.785 ForwardAgent yes 00:02:25.785 ForwardX11 yes 00:02:25.785 00:02:25.798 [Pipeline] withEnv 00:02:25.800 [Pipeline] { 00:02:25.815 [Pipeline] sh 00:02:26.095 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:26.095 source /etc/os-release 00:02:26.095 [[ -e /image.version ]] && img=$(< /image.version) 00:02:26.095 # Minimal, systemd-like check. 
00:02:26.095 if [[ -e /.dockerenv ]]; then 00:02:26.095 # Clear garbage from the node's name: 00:02:26.095 # agt-er_autotest_547-896 -> autotest_547-896 00:02:26.095 # $HOSTNAME is the actual container id 00:02:26.095 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:26.095 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:26.095 # We can assume this is a mount from a host where container is running, 00:02:26.095 # so fetch its hostname to easily identify the target swarm worker. 00:02:26.095 container="$(< /etc/hostname) ($agent)" 00:02:26.095 else 00:02:26.095 # Fallback 00:02:26.095 container=$agent 00:02:26.095 fi 00:02:26.095 fi 00:02:26.095 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:26.095 00:02:26.366 [Pipeline] } 00:02:26.386 [Pipeline] // withEnv 00:02:26.395 [Pipeline] setCustomBuildProperty 00:02:26.411 [Pipeline] stage 00:02:26.414 [Pipeline] { (Tests) 00:02:26.432 [Pipeline] sh 00:02:26.715 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:26.987 [Pipeline] sh 00:02:27.270 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:27.545 [Pipeline] timeout 00:02:27.545 Timeout set to expire in 40 min 00:02:27.548 [Pipeline] { 00:02:27.567 [Pipeline] sh 00:02:27.852 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:28.418 HEAD is now at 8711e7e9b autotest: reduce accel tests runs with SPDK_TEST_ACCEL flag 00:02:28.430 [Pipeline] sh 00:02:28.710 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:28.982 [Pipeline] sh 00:02:29.266 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:29.542 [Pipeline] sh 00:02:29.825 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:02:30.084 ++ readlink -f spdk_repo 00:02:30.084 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:30.084 + [[ -n /home/vagrant/spdk_repo ]] 00:02:30.084 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:30.084 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:30.084 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:30.084 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:30.084 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:30.084 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:30.084 + cd /home/vagrant/spdk_repo 00:02:30.084 + source /etc/os-release 00:02:30.084 ++ NAME='Fedora Linux' 00:02:30.084 ++ VERSION='38 (Cloud Edition)' 00:02:30.084 ++ ID=fedora 00:02:30.084 ++ VERSION_ID=38 00:02:30.084 ++ VERSION_CODENAME= 00:02:30.084 ++ PLATFORM_ID=platform:f38 00:02:30.084 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:30.084 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:30.084 ++ LOGO=fedora-logo-icon 00:02:30.084 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:30.084 ++ HOME_URL=https://fedoraproject.org/ 00:02:30.084 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:30.084 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:30.084 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:30.084 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:30.084 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:30.084 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:30.084 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:30.084 ++ SUPPORT_END=2024-05-14 00:02:30.084 ++ VARIANT='Cloud Edition' 00:02:30.084 ++ VARIANT_ID=cloud 00:02:30.084 + uname -a 00:02:30.084 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:30.084 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:30.343 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:30.910 Hugepages 00:02:30.910 node hugesize free / total 00:02:30.910 node0 1048576kB 0 / 0 00:02:30.910 node0 2048kB 0 / 0 00:02:30.910 00:02:30.910 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:30.910 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:30.910 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:30.910 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:30.910 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:30.910 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:30.910 + rm -f /tmp/spdk-ld-path 00:02:30.910 + source autorun-spdk.conf 00:02:30.910 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:30.910 ++ SPDK_TEST_NVME=1 00:02:30.910 ++ SPDK_TEST_FTL=1 00:02:30.910 ++ SPDK_TEST_ISAL=1 00:02:30.910 ++ SPDK_RUN_ASAN=1 00:02:30.910 ++ SPDK_RUN_UBSAN=1 00:02:30.910 ++ SPDK_TEST_XNVME=1 00:02:30.910 ++ SPDK_TEST_NVME_FDP=1 00:02:30.910 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:30.910 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:30.910 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:30.910 ++ RUN_NIGHTLY=1 00:02:30.910 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:30.910 + [[ -n '' ]] 00:02:30.910 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:30.910 + for M in /var/spdk/build-*-manifest.txt 00:02:30.910 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:30.910 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:30.910 + for M in /var/spdk/build-*-manifest.txt 00:02:30.910 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:30.910 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:30.910 ++ uname 00:02:30.910 + [[ Linux == \L\i\n\u\x ]] 00:02:30.910 + sudo dmesg -T 00:02:31.169 + sudo dmesg --clear 00:02:31.169 + dmesg_pid=5886 00:02:31.169 + [[ Fedora Linux == FreeBSD ]] 00:02:31.169 + export 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:31.169 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:31.169 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:31.169 + sudo dmesg -Tw 00:02:31.169 + [[ -x /usr/src/fio-static/fio ]] 00:02:31.170 + export FIO_BIN=/usr/src/fio-static/fio 00:02:31.170 + FIO_BIN=/usr/src/fio-static/fio 00:02:31.170 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:31.170 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:31.170 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:31.170 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:31.170 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:31.170 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:31.170 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:31.170 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:31.170 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:31.170 Test configuration: 00:02:31.170 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:31.170 SPDK_TEST_NVME=1 00:02:31.170 SPDK_TEST_FTL=1 00:02:31.170 SPDK_TEST_ISAL=1 00:02:31.170 SPDK_RUN_ASAN=1 00:02:31.170 SPDK_RUN_UBSAN=1 00:02:31.170 SPDK_TEST_XNVME=1 00:02:31.170 SPDK_TEST_NVME_FDP=1 00:02:31.170 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:31.170 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:31.170 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:31.170 RUN_NIGHTLY=1 09:28:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:31.170 09:28:08 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:31.170 09:28:08 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:31.170 09:28:08 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:31.170 09:28:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.170 09:28:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.170 09:28:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.170 09:28:08 -- paths/export.sh@5 -- $ export PATH 00:02:31.170 09:28:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:02:31.170 09:28:08 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:31.170 09:28:08 -- common/autobuild_common.sh@447 -- $ date +%s 00:02:31.170 09:28:08 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721813288.XXXXXX 00:02:31.170 09:28:08 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721813288.gOPkmw 00:02:31.170 09:28:08 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:31.170 09:28:08 -- common/autobuild_common.sh@453 -- $ '[' -n v23.11 ']' 00:02:31.170 09:28:08 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:31.170 09:28:08 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:31.170 09:28:08 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:31.170 09:28:08 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:31.170 09:28:08 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:31.170 09:28:08 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:02:31.170 09:28:08 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.170 09:28:08 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:31.170 09:28:08 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:31.170 09:28:08 -- pm/common@17 -- $ local monitor 00:02:31.170 09:28:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.170 09:28:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.170 09:28:08 -- pm/common@25 -- $ sleep 1 00:02:31.170 09:28:08 -- pm/common@21 -- $ date +%s 00:02:31.170 09:28:08 -- pm/common@21 -- $ date +%s 00:02:31.170 09:28:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721813288 00:02:31.429 09:28:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721813288 00:02:31.429 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721813288_collect-vmstat.pm.log 00:02:31.429 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721813288_collect-cpu-load.pm.log 00:02:32.366 09:28:09 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:32.366 09:28:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:32.366 09:28:09 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:32.366 09:28:09 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:32.366 09:28:09 -- spdk/autobuild.sh@16 -- $ date -u 00:02:32.366 Wed Jul 24 09:28:09 AM UTC 2024 00:02:32.366 09:28:09 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:32.366 v24.09-pre-311-g8711e7e9b 00:02:32.366 09:28:10 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:32.366 09:28:10 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:32.366 09:28:10 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:32.366 
09:28:10 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:32.366 09:28:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.366 ************************************ 00:02:32.366 START TEST asan 00:02:32.366 ************************************ 00:02:32.366 using asan 00:02:32.366 09:28:10 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:32.366 00:02:32.366 real 0m0.000s 00:02:32.366 user 0m0.000s 00:02:32.366 sys 0m0.000s 00:02:32.366 09:28:10 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:32.366 ************************************ 00:02:32.366 END TEST asan 00:02:32.366 09:28:10 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:32.366 ************************************ 00:02:32.366 09:28:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:32.366 09:28:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:32.366 09:28:10 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:32.366 09:28:10 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:32.366 09:28:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.366 ************************************ 00:02:32.366 START TEST ubsan 00:02:32.366 ************************************ 00:02:32.366 using ubsan 00:02:32.366 09:28:10 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:32.366 00:02:32.366 real 0m0.000s 00:02:32.366 user 0m0.000s 00:02:32.366 sys 0m0.000s 00:02:32.366 09:28:10 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:32.366 09:28:10 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:32.366 ************************************ 00:02:32.367 END TEST ubsan 00:02:32.367 ************************************ 00:02:32.367 09:28:10 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:32.367 09:28:10 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:32.367 09:28:10 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:32.367 09:28:10 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:32.367 09:28:10 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:32.367 09:28:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.367 ************************************ 00:02:32.367 START TEST build_native_dpdk 00:02:32.367 ************************************ 00:02:32.367 09:28:10 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:32.367 09:28:10 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:32.367 09:28:10 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:32.367 09:28:10 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:32.367 09:28:10 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:32.367 09:28:10 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:32.367 09:28:10 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:32.367 09:28:10 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:32.367 09:28:10 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:32.367 09:28:10 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:32.367 09:28:10 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:32.367 09:28:10 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:32.367 09:28:10 build_native_dpdk -- 
common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:32.367 09:28:10 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:32.367 09:28:10 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:32.367 09:28:10 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:32.367 09:28:10 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:32.367 09:28:10 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:32.367 09:28:10 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:32.367 09:28:10 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:32.367 09:28:10 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:32.626 eeb0605f11 version: 23.11.0 00:02:32.626 238778122a doc: update release notes for 23.11 00:02:32.626 46aa6b3cfc doc: fix description of RSS features 00:02:32.626 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:32.626 7e421ae345 devtools: support skipping forbid rule check 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:32.626 09:28:10 build_native_dpdk -- 
scripts/common.sh@333 -- $ read -ra ver1 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:32.626 patching file config/rte_config.h 00:02:32.626 Hunk #1 succeeded at 60 (offset 1 line). 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:32.626 09:28:10 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:32.626 patching file lib/pcapng/rte_pcapng.c 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:32.626 09:28:10 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:37.898 The Meson build system 00:02:37.898 Version: 1.3.1 00:02:37.898 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:37.898 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:37.898 Build type: native build 00:02:37.898 Program cat found: YES (/usr/bin/cat) 00:02:37.898 Project name: DPDK 00:02:37.898 Project version: 23.11.0 00:02:37.898 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:37.898 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:37.898 Host machine cpu family: x86_64 00:02:37.898 Host machine cpu: x86_64 00:02:37.898 Message: ## Building in Developer Mode ## 00:02:37.898 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:37.898 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:37.898 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:37.898 Program python3 found: YES (/usr/bin/python3) 00:02:37.898 Program cat found: YES (/usr/bin/cat) 00:02:37.898 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:37.898 Compiler for C supports arguments -march=native: YES 00:02:37.898 Checking for size of "void *" : 8 00:02:37.898 Checking for size of "void *" : 8 (cached) 00:02:37.898 Library m found: YES 00:02:37.898 Library numa found: YES 00:02:37.898 Has header "numaif.h" : YES 00:02:37.898 Library fdt found: NO 00:02:37.898 Library execinfo found: NO 00:02:37.898 Has header "execinfo.h" : YES 00:02:37.898 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:37.898 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:37.898 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:37.898 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:37.898 Run-time dependency openssl found: YES 3.0.9 00:02:37.898 Run-time dependency libpcap found: YES 1.10.4 00:02:37.898 Has header "pcap.h" with dependency libpcap: YES 00:02:37.898 Compiler for C supports arguments -Wcast-qual: YES 00:02:37.898 Compiler for C supports arguments -Wdeprecated: YES 00:02:37.898 Compiler for C supports arguments -Wformat: YES 00:02:37.898 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:37.898 Compiler for C supports arguments -Wformat-security: NO 00:02:37.898 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:37.898 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:37.898 Compiler for C supports arguments -Wnested-externs: YES 00:02:37.898 Compiler for C supports arguments -Wold-style-definition: YES 00:02:37.898 Compiler for C supports arguments -Wpointer-arith: YES 00:02:37.898 Compiler for C supports arguments -Wsign-compare: YES 00:02:37.898 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:37.898 Compiler for C supports arguments -Wundef: YES 00:02:37.898 Compiler for C supports arguments -Wwrite-strings: YES 00:02:37.898 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:37.898 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:37.898 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:37.898 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:37.898 Program objdump found: YES (/usr/bin/objdump) 00:02:37.898 Compiler for C supports arguments -mavx512f: YES 00:02:37.898 Checking if "AVX512 checking" compiles: YES 00:02:37.898 Fetching value of define "__SSE4_2__" : 1 00:02:37.898 Fetching value of define "__AES__" : 1 00:02:37.898 Fetching value of define "__AVX__" : 1 00:02:37.898 Fetching value of define "__AVX2__" : 1 00:02:37.898 Fetching value of define "__AVX512BW__" : 1 00:02:37.898 Fetching value of define "__AVX512CD__" : 1 00:02:37.898 Fetching value of define "__AVX512DQ__" : 1 00:02:37.898 Fetching value of define "__AVX512F__" : 1 00:02:37.898 Fetching value of define "__AVX512VL__" : 1 00:02:37.898 Fetching value of define "__PCLMUL__" : 1 00:02:37.898 Fetching value of define "__RDRND__" : 1 00:02:37.898 Fetching value of define "__RDSEED__" : 1 00:02:37.898 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:37.898 Fetching value of define "__znver1__" : (undefined) 00:02:37.898 Fetching value of define "__znver2__" : (undefined) 00:02:37.898 Fetching value of define "__znver3__" : (undefined) 00:02:37.898 Fetching value of define "__znver4__" : (undefined) 00:02:37.898 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:37.898 Message: lib/log: Defining dependency "log" 00:02:37.898 Message: lib/kvargs: Defining dependency "kvargs" 00:02:37.898 Message: lib/telemetry: Defining dependency 
"telemetry" 00:02:37.898 Checking for function "getentropy" : NO 00:02:37.898 Message: lib/eal: Defining dependency "eal" 00:02:37.898 Message: lib/ring: Defining dependency "ring" 00:02:37.898 Message: lib/rcu: Defining dependency "rcu" 00:02:37.898 Message: lib/mempool: Defining dependency "mempool" 00:02:37.898 Message: lib/mbuf: Defining dependency "mbuf" 00:02:37.898 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:37.898 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:37.898 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:37.898 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:37.898 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:37.898 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:37.898 Compiler for C supports arguments -mpclmul: YES 00:02:37.898 Compiler for C supports arguments -maes: YES 00:02:37.898 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:37.898 Compiler for C supports arguments -mavx512bw: YES 00:02:37.898 Compiler for C supports arguments -mavx512dq: YES 00:02:37.898 Compiler for C supports arguments -mavx512vl: YES 00:02:37.898 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:37.898 Compiler for C supports arguments -mavx2: YES 00:02:37.898 Compiler for C supports arguments -mavx: YES 00:02:37.898 Message: lib/net: Defining dependency "net" 00:02:37.898 Message: lib/meter: Defining dependency "meter" 00:02:37.898 Message: lib/ethdev: Defining dependency "ethdev" 00:02:37.898 Message: lib/pci: Defining dependency "pci" 00:02:37.898 Message: lib/cmdline: Defining dependency "cmdline" 00:02:37.898 Message: lib/metrics: Defining dependency "metrics" 00:02:37.898 Message: lib/hash: Defining dependency "hash" 00:02:37.898 Message: lib/timer: Defining dependency "timer" 00:02:37.898 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:37.898 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:37.898 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:37.898 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:37.898 Message: lib/acl: Defining dependency "acl" 00:02:37.898 Message: lib/bbdev: Defining dependency "bbdev" 00:02:37.898 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:37.898 Run-time dependency libelf found: YES 0.190 00:02:37.898 Message: lib/bpf: Defining dependency "bpf" 00:02:37.898 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:37.898 Message: lib/compressdev: Defining dependency "compressdev" 00:02:37.898 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:37.898 Message: lib/distributor: Defining dependency "distributor" 00:02:37.898 Message: lib/dmadev: Defining dependency "dmadev" 00:02:37.898 Message: lib/efd: Defining dependency "efd" 00:02:37.898 Message: lib/eventdev: Defining dependency "eventdev" 00:02:37.898 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:37.898 Message: lib/gpudev: Defining dependency "gpudev" 00:02:37.898 Message: lib/gro: Defining dependency "gro" 00:02:37.898 Message: lib/gso: Defining dependency "gso" 00:02:37.898 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:37.898 Message: lib/jobstats: Defining dependency "jobstats" 00:02:37.898 Message: lib/latencystats: Defining dependency "latencystats" 00:02:37.898 Message: lib/lpm: Defining dependency "lpm" 00:02:37.898 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:37.898 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:37.898 Fetching value of define "__AVX512IFMA__" : 
(undefined) 00:02:37.898 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:37.898 Message: lib/member: Defining dependency "member" 00:02:37.898 Message: lib/pcapng: Defining dependency "pcapng" 00:02:37.898 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:37.898 Message: lib/power: Defining dependency "power" 00:02:37.898 Message: lib/rawdev: Defining dependency "rawdev" 00:02:37.898 Message: lib/regexdev: Defining dependency "regexdev" 00:02:37.898 Message: lib/mldev: Defining dependency "mldev" 00:02:37.898 Message: lib/rib: Defining dependency "rib" 00:02:37.898 Message: lib/reorder: Defining dependency "reorder" 00:02:37.898 Message: lib/sched: Defining dependency "sched" 00:02:37.898 Message: lib/security: Defining dependency "security" 00:02:37.898 Message: lib/stack: Defining dependency "stack" 00:02:37.898 Has header "linux/userfaultfd.h" : YES 00:02:37.898 Has header "linux/vduse.h" : YES 00:02:37.899 Message: lib/vhost: Defining dependency "vhost" 00:02:37.899 Message: lib/ipsec: Defining dependency "ipsec" 00:02:37.899 Message: lib/pdcp: Defining dependency "pdcp" 00:02:37.899 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:37.899 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:37.899 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:37.899 Message: lib/fib: Defining dependency "fib" 00:02:37.899 Message: lib/port: Defining dependency "port" 00:02:37.899 Message: lib/pdump: Defining dependency "pdump" 00:02:37.899 Message: lib/table: Defining dependency "table" 00:02:37.899 Message: lib/pipeline: Defining dependency "pipeline" 00:02:37.899 Message: lib/graph: Defining dependency "graph" 00:02:37.899 Message: lib/node: Defining dependency "node" 00:02:37.899 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:37.899 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:37.899 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:39.798 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:39.798 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:39.798 Compiler for C supports arguments -Wno-unused-value: YES 00:02:39.798 Compiler for C supports arguments -Wno-format: YES 00:02:39.798 Compiler for C supports arguments -Wno-format-security: YES 00:02:39.798 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:39.798 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:39.798 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:39.798 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:39.798 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:39.798 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:39.798 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:39.798 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:39.798 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:39.798 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:39.798 Has header "sys/epoll.h" : YES 00:02:39.798 Program doxygen found: YES (/usr/bin/doxygen) 00:02:39.798 Configuring doxy-api-html.conf using configuration 00:02:39.798 Configuring doxy-api-man.conf using configuration 00:02:39.798 Program mandb found: YES (/usr/bin/mandb) 00:02:39.798 Program sphinx-build found: NO 00:02:39.798 Configuring rte_build_config.h using configuration 00:02:39.798 Message: 00:02:39.798 ================= 00:02:39.798 Applications Enabled 00:02:39.798 
================= 00:02:39.798 00:02:39.799 apps: 00:02:39.799 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:39.799 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:39.799 test-pmd, test-regex, test-sad, test-security-perf, 00:02:39.799 00:02:39.799 Message: 00:02:39.799 ================= 00:02:39.799 Libraries Enabled 00:02:39.799 ================= 00:02:39.799 00:02:39.799 libs: 00:02:39.799 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:39.799 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:39.799 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:39.799 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:39.799 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:39.799 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:39.799 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:39.799 00:02:39.799 00:02:39.799 Message: 00:02:39.799 =============== 00:02:39.799 Drivers Enabled 00:02:39.799 =============== 00:02:39.799 00:02:39.799 common: 00:02:39.799 00:02:39.799 bus: 00:02:39.799 pci, vdev, 00:02:39.799 mempool: 00:02:39.799 ring, 00:02:39.799 dma: 00:02:39.799 00:02:39.799 net: 00:02:39.799 i40e, 00:02:39.799 raw: 00:02:39.799 00:02:39.799 crypto: 00:02:39.799 00:02:39.799 compress: 00:02:39.799 00:02:39.799 regex: 00:02:39.799 00:02:39.799 ml: 00:02:39.799 00:02:39.799 vdpa: 00:02:39.799 00:02:39.799 event: 00:02:39.799 00:02:39.799 baseband: 00:02:39.799 00:02:39.799 gpu: 00:02:39.799 00:02:39.799 00:02:39.799 Message: 00:02:39.799 ================= 00:02:39.799 Content Skipped 00:02:39.799 ================= 00:02:39.799 00:02:39.799 apps: 00:02:39.799 00:02:39.799 libs: 00:02:39.799 00:02:39.799 drivers: 00:02:39.799 common/cpt: not in enabled drivers build config 00:02:39.799 common/dpaax: not in enabled drivers build config 00:02:39.799 common/iavf: not in enabled drivers build config 00:02:39.799 common/idpf: not in enabled drivers build config 00:02:39.799 common/mvep: not in enabled drivers build config 00:02:39.799 common/octeontx: not in enabled drivers build config 00:02:39.799 bus/auxiliary: not in enabled drivers build config 00:02:39.799 bus/cdx: not in enabled drivers build config 00:02:39.799 bus/dpaa: not in enabled drivers build config 00:02:39.799 bus/fslmc: not in enabled drivers build config 00:02:39.799 bus/ifpga: not in enabled drivers build config 00:02:39.799 bus/platform: not in enabled drivers build config 00:02:39.799 bus/vmbus: not in enabled drivers build config 00:02:39.799 common/cnxk: not in enabled drivers build config 00:02:39.799 common/mlx5: not in enabled drivers build config 00:02:39.799 common/nfp: not in enabled drivers build config 00:02:39.799 common/qat: not in enabled drivers build config 00:02:39.799 common/sfc_efx: not in enabled drivers build config 00:02:39.799 mempool/bucket: not in enabled drivers build config 00:02:39.799 mempool/cnxk: not in enabled drivers build config 00:02:39.799 mempool/dpaa: not in enabled drivers build config 00:02:39.799 mempool/dpaa2: not in enabled drivers build config 00:02:39.799 mempool/octeontx: not in enabled drivers build config 00:02:39.799 mempool/stack: not in enabled drivers build config 00:02:39.799 dma/cnxk: not in enabled drivers build config 00:02:39.799 dma/dpaa: not in enabled drivers build config 00:02:39.799 dma/dpaa2: not in enabled drivers build 
config 00:02:39.799 dma/hisilicon: not in enabled drivers build config 00:02:39.799 dma/idxd: not in enabled drivers build config 00:02:39.799 dma/ioat: not in enabled drivers build config 00:02:39.799 dma/skeleton: not in enabled drivers build config 00:02:39.799 net/af_packet: not in enabled drivers build config 00:02:39.799 net/af_xdp: not in enabled drivers build config 00:02:39.799 net/ark: not in enabled drivers build config 00:02:39.799 net/atlantic: not in enabled drivers build config 00:02:39.799 net/avp: not in enabled drivers build config 00:02:39.799 net/axgbe: not in enabled drivers build config 00:02:39.799 net/bnx2x: not in enabled drivers build config 00:02:39.799 net/bnxt: not in enabled drivers build config 00:02:39.799 net/bonding: not in enabled drivers build config 00:02:39.799 net/cnxk: not in enabled drivers build config 00:02:39.799 net/cpfl: not in enabled drivers build config 00:02:39.799 net/cxgbe: not in enabled drivers build config 00:02:39.799 net/dpaa: not in enabled drivers build config 00:02:39.799 net/dpaa2: not in enabled drivers build config 00:02:39.799 net/e1000: not in enabled drivers build config 00:02:39.799 net/ena: not in enabled drivers build config 00:02:39.799 net/enetc: not in enabled drivers build config 00:02:39.799 net/enetfec: not in enabled drivers build config 00:02:39.799 net/enic: not in enabled drivers build config 00:02:39.799 net/failsafe: not in enabled drivers build config 00:02:39.799 net/fm10k: not in enabled drivers build config 00:02:39.799 net/gve: not in enabled drivers build config 00:02:39.799 net/hinic: not in enabled drivers build config 00:02:39.799 net/hns3: not in enabled drivers build config 00:02:39.799 net/iavf: not in enabled drivers build config 00:02:39.799 net/ice: not in enabled drivers build config 00:02:39.799 net/idpf: not in enabled drivers build config 00:02:39.799 net/igc: not in enabled drivers build config 00:02:39.799 net/ionic: not in enabled drivers build config 00:02:39.799 net/ipn3ke: not in enabled drivers build config 00:02:39.799 net/ixgbe: not in enabled drivers build config 00:02:39.799 net/mana: not in enabled drivers build config 00:02:39.799 net/memif: not in enabled drivers build config 00:02:39.799 net/mlx4: not in enabled drivers build config 00:02:39.799 net/mlx5: not in enabled drivers build config 00:02:39.799 net/mvneta: not in enabled drivers build config 00:02:39.799 net/mvpp2: not in enabled drivers build config 00:02:39.799 net/netvsc: not in enabled drivers build config 00:02:39.799 net/nfb: not in enabled drivers build config 00:02:39.799 net/nfp: not in enabled drivers build config 00:02:39.799 net/ngbe: not in enabled drivers build config 00:02:39.799 net/null: not in enabled drivers build config 00:02:39.799 net/octeontx: not in enabled drivers build config 00:02:39.799 net/octeon_ep: not in enabled drivers build config 00:02:39.799 net/pcap: not in enabled drivers build config 00:02:39.799 net/pfe: not in enabled drivers build config 00:02:39.799 net/qede: not in enabled drivers build config 00:02:39.799 net/ring: not in enabled drivers build config 00:02:39.799 net/sfc: not in enabled drivers build config 00:02:39.799 net/softnic: not in enabled drivers build config 00:02:39.799 net/tap: not in enabled drivers build config 00:02:39.799 net/thunderx: not in enabled drivers build config 00:02:39.799 net/txgbe: not in enabled drivers build config 00:02:39.799 net/vdev_netvsc: not in enabled drivers build config 00:02:39.799 net/vhost: not in enabled drivers build config 
00:02:39.799 net/virtio: not in enabled drivers build config 00:02:39.799 net/vmxnet3: not in enabled drivers build config 00:02:39.799 raw/cnxk_bphy: not in enabled drivers build config 00:02:39.799 raw/cnxk_gpio: not in enabled drivers build config 00:02:39.799 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:39.799 raw/ifpga: not in enabled drivers build config 00:02:39.799 raw/ntb: not in enabled drivers build config 00:02:39.799 raw/skeleton: not in enabled drivers build config 00:02:39.799 crypto/armv8: not in enabled drivers build config 00:02:39.799 crypto/bcmfs: not in enabled drivers build config 00:02:39.799 crypto/caam_jr: not in enabled drivers build config 00:02:39.799 crypto/ccp: not in enabled drivers build config 00:02:39.799 crypto/cnxk: not in enabled drivers build config 00:02:39.799 crypto/dpaa_sec: not in enabled drivers build config 00:02:39.799 crypto/dpaa2_sec: not in enabled drivers build config 00:02:39.799 crypto/ipsec_mb: not in enabled drivers build config 00:02:39.799 crypto/mlx5: not in enabled drivers build config 00:02:39.799 crypto/mvsam: not in enabled drivers build config 00:02:39.799 crypto/nitrox: not in enabled drivers build config 00:02:39.799 crypto/null: not in enabled drivers build config 00:02:39.799 crypto/octeontx: not in enabled drivers build config 00:02:39.799 crypto/openssl: not in enabled drivers build config 00:02:39.799 crypto/scheduler: not in enabled drivers build config 00:02:39.799 crypto/uadk: not in enabled drivers build config 00:02:39.799 crypto/virtio: not in enabled drivers build config 00:02:39.799 compress/isal: not in enabled drivers build config 00:02:39.799 compress/mlx5: not in enabled drivers build config 00:02:39.799 compress/octeontx: not in enabled drivers build config 00:02:39.799 compress/zlib: not in enabled drivers build config 00:02:39.799 regex/mlx5: not in enabled drivers build config 00:02:39.799 regex/cn9k: not in enabled drivers build config 00:02:39.799 ml/cnxk: not in enabled drivers build config 00:02:39.799 vdpa/ifc: not in enabled drivers build config 00:02:39.799 vdpa/mlx5: not in enabled drivers build config 00:02:39.799 vdpa/nfp: not in enabled drivers build config 00:02:39.799 vdpa/sfc: not in enabled drivers build config 00:02:39.799 event/cnxk: not in enabled drivers build config 00:02:39.799 event/dlb2: not in enabled drivers build config 00:02:39.799 event/dpaa: not in enabled drivers build config 00:02:39.799 event/dpaa2: not in enabled drivers build config 00:02:39.799 event/dsw: not in enabled drivers build config 00:02:39.799 event/opdl: not in enabled drivers build config 00:02:39.799 event/skeleton: not in enabled drivers build config 00:02:39.799 event/sw: not in enabled drivers build config 00:02:39.799 event/octeontx: not in enabled drivers build config 00:02:39.799 baseband/acc: not in enabled drivers build config 00:02:39.799 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:39.799 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:39.799 baseband/la12xx: not in enabled drivers build config 00:02:39.799 baseband/null: not in enabled drivers build config 00:02:39.799 baseband/turbo_sw: not in enabled drivers build config 00:02:39.799 gpu/cuda: not in enabled drivers build config 00:02:39.799 00:02:39.799 00:02:39.799 Build targets in project: 217 00:02:39.799 00:02:39.799 DPDK 23.11.0 00:02:39.799 00:02:39.799 User defined options 00:02:39.799 libdir : lib 00:02:39.799 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:39.799 c_args : -fPIC -g 
-fcommon -Werror -Wno-stringop-overflow 00:02:39.799 c_link_args : 00:02:39.799 enable_docs : false 00:02:39.799 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:39.799 enable_kmods : false 00:02:39.799 machine : native 00:02:39.799 tests : false 00:02:39.799 00:02:39.799 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:39.799 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:39.799 09:28:17 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:39.799 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:39.799 [1/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:39.799 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:39.799 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:39.799 [4/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:39.799 [5/707] Linking static target lib/librte_kvargs.a 00:02:40.057 [6/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:40.057 [7/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:40.057 [8/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:40.057 [9/707] Linking static target lib/librte_log.a 00:02:40.057 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:40.057 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.315 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:40.315 [13/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:40.315 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:40.315 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:40.315 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:40.315 [17/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.315 [18/707] Linking target lib/librte_log.so.24.0 00:02:40.573 [19/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:40.573 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:40.573 [21/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:40.573 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:40.573 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:40.573 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:40.573 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:40.832 [26/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:40.832 [27/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:40.832 [28/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:40.832 [29/707] Linking static target lib/librte_telemetry.a 00:02:40.832 [30/707] Linking target lib/librte_kvargs.so.24.0 00:02:40.832 [31/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:40.832 [32/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:40.832 
[33/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:40.832 [34/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:41.091 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:41.091 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:41.091 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:41.091 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:41.091 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:41.091 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:41.091 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:41.091 [42/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.091 [43/707] Linking target lib/librte_telemetry.so.24.0 00:02:41.349 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:41.349 [45/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:41.349 [46/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:41.349 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:41.349 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:41.619 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:41.620 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:41.620 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:41.620 [52/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:41.620 [53/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:41.620 [54/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:41.620 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:41.893 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:41.893 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:41.893 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:41.893 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:41.893 [60/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:41.893 [61/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:41.893 [62/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:41.893 [63/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:41.893 [64/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:41.893 [65/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:41.893 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:42.152 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:42.152 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:42.152 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:42.152 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:42.152 [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:42.152 [72/707] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:42.152 [73/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:42.410 [74/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:42.410 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:42.410 [76/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:42.410 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:42.410 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:42.410 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:42.669 [80/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:42.669 [81/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:42.669 [82/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:42.669 [83/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:42.669 [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:42.669 [85/707] Linking static target lib/librte_ring.a 00:02:42.669 [86/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:42.927 [87/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:42.927 [88/707] Linking static target lib/librte_eal.a 00:02:42.927 [89/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:42.927 [90/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.927 [91/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:42.927 [92/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:42.927 [93/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:43.185 [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:43.185 [95/707] Linking static target lib/librte_mempool.a 00:02:43.185 [96/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:43.185 [97/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:43.185 [98/707] Linking static target lib/librte_rcu.a 00:02:43.443 [99/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:43.443 [100/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:43.443 [101/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:43.443 [102/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:43.443 [103/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:43.443 [104/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:43.443 [105/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.702 [106/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.702 [107/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:43.702 [108/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:43.702 [109/707] Linking static target lib/librte_mbuf.a 00:02:43.702 [110/707] Linking static target lib/librte_net.a 00:02:43.702 [111/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:43.702 [112/707] Linking static target lib/librte_meter.a 00:02:43.702 [113/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:43.960 [114/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:43.960 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:43.960 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:43.960 [117/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.960 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:44.219 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.219 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:44.219 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:44.785 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:44.785 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:44.785 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:44.785 [125/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:44.785 [126/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:44.785 [127/707] Linking static target lib/librte_pci.a 00:02:44.785 [128/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:44.785 [129/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:44.785 [130/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:44.785 [131/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:44.785 [132/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:44.785 [133/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.044 [134/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:45.044 [135/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:45.044 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:45.044 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:45.044 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:45.044 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:45.044 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:45.044 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:45.044 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:45.044 [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:45.044 [144/707] Linking static target lib/librte_cmdline.a 00:02:45.303 [145/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:45.303 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:45.303 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:45.303 [148/707] Linking static target lib/librte_metrics.a 00:02:45.562 [149/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:45.562 [150/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:45.821 [151/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.821 [152/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:45.821 [153/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:45.821 [154/707] Linking 
static target lib/librte_timer.a 00:02:45.821 [155/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.079 [156/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:46.079 [157/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.079 [158/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:46.079 [159/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:46.336 [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:46.595 [161/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:46.595 [162/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:46.595 [163/707] Linking static target lib/librte_bitratestats.a 00:02:46.855 [164/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.855 [165/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:46.855 [166/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:46.855 [167/707] Linking static target lib/librte_bbdev.a 00:02:46.855 [168/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:47.114 [169/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:47.114 [170/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:47.114 [171/707] Linking static target lib/librte_hash.a 00:02:47.372 [172/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:47.372 [173/707] Linking static target lib/librte_ethdev.a 00:02:47.372 [174/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:47.372 [175/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.372 [176/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:47.372 [177/707] Linking static target lib/acl/libavx2_tmp.a 00:02:47.372 [178/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:47.630 [179/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:47.630 [180/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:47.630 [181/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.918 [182/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:47.918 [183/707] Linking static target lib/librte_cfgfile.a 00:02:47.918 [184/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:47.918 [185/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:48.177 [186/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.177 [187/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:48.177 [188/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:48.177 [189/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:48.436 [190/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:48.436 [191/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:48.436 [192/707] Linking static target lib/librte_bpf.a 00:02:48.436 [193/707] Linking static target lib/librte_compressdev.a 00:02:48.436 [194/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:48.436 [195/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:48.436 [196/707] Linking static target 
lib/librte_acl.a 00:02:48.694 [197/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.694 [198/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:48.694 [199/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:48.694 [200/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:48.953 [201/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.953 [202/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:48.953 [203/707] Linking static target lib/librte_distributor.a 00:02:48.953 [204/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.953 [205/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.953 [206/707] Linking target lib/librte_eal.so.24.0 00:02:48.953 [207/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:49.211 [208/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.211 [209/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:49.211 [210/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:49.211 [211/707] Linking target lib/librte_ring.so.24.0 00:02:49.211 [212/707] Linking target lib/librte_meter.so.24.0 00:02:49.211 [213/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:49.211 [214/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:49.211 [215/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:49.211 [216/707] Linking target lib/librte_rcu.so.24.0 00:02:49.469 [217/707] Linking target lib/librte_mempool.so.24.0 00:02:49.469 [218/707] Linking target lib/librte_pci.so.24.0 00:02:49.469 [219/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:49.469 [220/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:49.469 [221/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:49.469 [222/707] Linking target lib/librte_timer.so.24.0 00:02:49.469 [223/707] Linking target lib/librte_mbuf.so.24.0 00:02:49.469 [224/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:49.469 [225/707] Linking target lib/librte_acl.so.24.0 00:02:49.469 [226/707] Linking target lib/librte_cfgfile.so.24.0 00:02:49.469 [227/707] Linking static target lib/librte_dmadev.a 00:02:49.469 [228/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:49.469 [229/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:49.728 [230/707] Linking target lib/librte_net.so.24.0 00:02:49.728 [231/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:49.728 [232/707] Linking target lib/librte_bbdev.so.24.0 00:02:49.728 [233/707] Linking target lib/librte_compressdev.so.24.0 00:02:49.728 [234/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:49.728 [235/707] Linking target lib/librte_distributor.so.24.0 00:02:49.728 [236/707] Linking target lib/librte_cmdline.so.24.0 00:02:49.728 [237/707] Linking target lib/librte_hash.so.24.0 00:02:49.728 [238/707] 
Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:49.728 [239/707] Linking static target lib/librte_efd.a 00:02:49.728 [240/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:49.728 [241/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.985 [242/707] Linking target lib/librte_dmadev.so.24.0 00:02:49.985 [243/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:49.985 [244/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:49.985 [245/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:49.985 [246/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:49.985 [247/707] Linking static target lib/librte_cryptodev.a 00:02:49.985 [248/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.985 [249/707] Linking target lib/librte_efd.so.24.0 00:02:50.243 [250/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:50.500 [251/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:50.500 [252/707] Linking static target lib/librte_dispatcher.a 00:02:50.500 [253/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:50.500 [254/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:50.758 [255/707] Linking static target lib/librte_gpudev.a 00:02:50.758 [256/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:50.758 [257/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:50.758 [258/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:50.758 [259/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.016 [260/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:51.016 [261/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.275 [262/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:51.275 [263/707] Linking target lib/librte_cryptodev.so.24.0 00:02:51.275 [264/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:51.275 [265/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:51.275 [266/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:51.275 [267/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:51.275 [268/707] Linking static target lib/librte_eventdev.a 00:02:51.275 [269/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.275 [270/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:51.533 [271/707] Linking target lib/librte_gpudev.so.24.0 00:02:51.533 [272/707] Linking static target lib/librte_gro.a 00:02:51.533 [273/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:51.533 [274/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:51.533 [275/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:51.533 [276/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:51.533 [277/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.533 [278/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 
00:02:51.791 [279/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:51.791 [280/707] Linking static target lib/librte_gso.a 00:02:51.791 [281/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.791 [282/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.791 [283/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:51.791 [284/707] Linking target lib/librte_ethdev.so.24.0 00:02:51.791 [285/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:52.103 [286/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:52.103 [287/707] Linking static target lib/librte_jobstats.a 00:02:52.103 [288/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:52.103 [289/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:52.103 [290/707] Linking target lib/librte_metrics.so.24.0 00:02:52.103 [291/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:52.103 [292/707] Linking target lib/librte_bpf.so.24.0 00:02:52.103 [293/707] Linking target lib/librte_gro.so.24.0 00:02:52.103 [294/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:52.103 [295/707] Linking target lib/librte_gso.so.24.0 00:02:52.103 [296/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:52.103 [297/707] Linking static target lib/librte_ip_frag.a 00:02:52.103 [298/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:52.103 [299/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:52.103 [300/707] Linking target lib/librte_bitratestats.so.24.0 00:02:52.377 [301/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.377 [302/707] Linking target lib/librte_jobstats.so.24.0 00:02:52.377 [303/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:52.377 [304/707] Linking static target lib/librte_latencystats.a 00:02:52.377 [305/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.377 [306/707] Linking target lib/librte_ip_frag.so.24.0 00:02:52.377 [307/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:52.377 [308/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:52.377 [309/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.635 [310/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:52.635 [311/707] Linking target lib/librte_latencystats.so.24.0 00:02:52.635 [312/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:52.635 [313/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:52.635 [314/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:52.635 [315/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:52.635 [316/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:52.892 [317/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:52.892 [318/707] Linking static target lib/librte_lpm.a 00:02:52.892 [319/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:52.892 [320/707] 
Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:52.892 [321/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:53.150 [322/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:53.150 [323/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:53.150 [324/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:53.150 [325/707] Linking static target lib/librte_pcapng.a 00:02:53.150 [326/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.150 [327/707] Linking target lib/librte_lpm.so.24.0 00:02:53.150 [328/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:53.407 [329/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.407 [330/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:53.407 [331/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.407 [332/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:53.407 [333/707] Linking target lib/librte_eventdev.so.24.0 00:02:53.407 [334/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:53.407 [335/707] Linking target lib/librte_pcapng.so.24.0 00:02:53.407 [336/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:53.407 [337/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:53.407 [338/707] Linking target lib/librte_dispatcher.so.24.0 00:02:53.407 [339/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:53.665 [340/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:53.665 [341/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:53.665 [342/707] Linking static target lib/librte_rawdev.a 00:02:53.665 [343/707] Linking static target lib/librte_power.a 00:02:53.665 [344/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:53.665 [345/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:53.665 [346/707] Linking static target lib/librte_regexdev.a 00:02:53.665 [347/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:53.922 [348/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:53.922 [349/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:53.922 [350/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:53.922 [351/707] Linking static target lib/librte_member.a 00:02:53.922 [352/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:53.922 [353/707] Linking static target lib/librte_mldev.a 00:02:53.922 [354/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.922 [355/707] Linking target lib/librte_rawdev.so.24.0 00:02:53.922 [356/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:54.180 [357/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.180 [358/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.180 [359/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:54.180 [360/707] Linking target lib/librte_member.so.24.0 00:02:54.180 [361/707] Compiling C 
object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:54.180 [362/707] Linking target lib/librte_power.so.24.0 00:02:54.180 [363/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:54.180 [364/707] Linking static target lib/librte_reorder.a 00:02:54.180 [365/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.438 [366/707] Linking target lib/librte_regexdev.so.24.0 00:02:54.439 [367/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:54.439 [368/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:54.439 [369/707] Linking static target lib/librte_rib.a 00:02:54.439 [370/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:54.439 [371/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:54.439 [372/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:54.439 [373/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:54.439 [374/707] Linking static target lib/librte_stack.a 00:02:54.439 [375/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.697 [376/707] Linking target lib/librte_reorder.so.24.0 00:02:54.697 [377/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:54.697 [378/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.697 [379/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:54.697 [380/707] Linking static target lib/librte_security.a 00:02:54.697 [381/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.697 [382/707] Linking target lib/librte_stack.so.24.0 00:02:54.697 [383/707] Linking target lib/librte_rib.so.24.0 00:02:54.955 [384/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:54.955 [385/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:54.955 [386/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:54.955 [387/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.213 [388/707] Linking target lib/librte_mldev.so.24.0 00:02:55.213 [389/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.213 [390/707] Linking target lib/librte_security.so.24.0 00:02:55.213 [391/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:55.213 [392/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:55.213 [393/707] Linking static target lib/librte_sched.a 00:02:55.213 [394/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:55.471 [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:55.471 [396/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:55.471 [397/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.471 [398/707] Linking target lib/librte_sched.so.24.0 00:02:55.729 [399/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:55.729 [400/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:55.729 [401/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:55.729 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:55.986 [403/707] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:55.986 [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:55.986 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:56.244 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:56.244 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:56.244 [408/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:56.244 [409/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:56.501 [410/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:56.501 [411/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:56.501 [412/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:56.501 [413/707] Linking static target lib/librte_ipsec.a 00:02:56.501 [414/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:56.501 [415/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:56.759 [416/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.759 [417/707] Linking target lib/librte_ipsec.so.24.0 00:02:57.017 [418/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:57.017 [419/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:57.017 [420/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:57.017 [421/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:57.276 [422/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:57.276 [423/707] Linking static target lib/librte_fib.a 00:02:57.276 [424/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:57.276 [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:57.276 [426/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:57.533 [427/707] Linking static target lib/librte_pdcp.a 00:02:57.533 [428/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:57.533 [429/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.533 [430/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:57.533 [431/707] Linking target lib/librte_fib.so.24.0 00:02:57.533 [432/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:57.791 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.791 [434/707] Linking target lib/librte_pdcp.so.24.0 00:02:58.049 [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:58.049 [436/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:58.307 [437/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:58.307 [438/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:58.307 [439/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:58.307 [440/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:58.565 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:58.565 [442/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:58.565 [443/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:58.565 [444/707] Linking static target lib/librte_port.a 00:02:58.823 [445/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:58.823 
[446/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:58.823 [447/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:58.823 [448/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:59.121 [449/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:59.121 [450/707] Linking static target lib/librte_pdump.a 00:02:59.121 [451/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:59.121 [452/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:59.121 [453/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:59.121 [454/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.121 [455/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.382 [456/707] Linking target lib/librte_port.so.24.0 00:02:59.382 [457/707] Linking target lib/librte_pdump.so.24.0 00:02:59.382 [458/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:59.640 [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:59.640 [460/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:59.640 [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:59.640 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:59.640 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:59.899 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:59.899 [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:59.899 [466/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:00.157 [467/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:00.157 [468/707] Linking static target lib/librte_table.a 00:03:00.157 [469/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:00.415 [470/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:00.415 [471/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:00.672 [472/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.672 [473/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:00.672 [474/707] Linking target lib/librte_table.so.24.0 00:03:00.672 [475/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:00.930 [476/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:00.930 [477/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:00.930 [478/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:01.187 [479/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:01.187 [480/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:01.187 [481/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:01.445 [482/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:01.445 [483/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:01.702 [484/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:01.702 [485/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:01.702 [486/707] Linking 
static target lib/librte_graph.a 00:03:01.702 [487/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:01.702 [488/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:01.702 [489/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:01.960 [490/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:02.218 [491/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:02.218 [492/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.218 [493/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:02.477 [494/707] Linking target lib/librte_graph.so.24.0 00:03:02.477 [495/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:02.477 [496/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:02.477 [497/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:02.735 [498/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:02.735 [499/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:02.735 [500/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:02.735 [501/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:02.735 [502/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:02.994 [503/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:02.994 [504/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:03.252 [505/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:03.252 [506/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:03.252 [507/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:03.252 [508/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:03.252 [509/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:03.252 [510/707] Linking static target lib/librte_node.a 00:03:03.252 [511/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:03.253 [512/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:03.511 [513/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:03.511 [514/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:03.511 [515/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.769 [516/707] Linking target lib/librte_node.so.24.0 00:03:03.769 [517/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:03.769 [518/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:03.769 [519/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:03.769 [520/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:03.769 [521/707] Linking static target drivers/librte_bus_pci.a 00:03:03.769 [522/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:03.769 [523/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:03.769 [524/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:03.769 [525/707] Linking static target drivers/librte_bus_vdev.a 00:03:04.027 [526/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:04.027 [527/707] Compiling C 
object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.028 [528/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:04.028 [529/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:04.028 [530/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.286 [531/707] Linking target drivers/librte_bus_vdev.so.24.0 00:03:04.286 [532/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:04.286 [533/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:04.286 [534/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.286 [535/707] Linking target drivers/librte_bus_pci.so.24.0 00:03:04.286 [536/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:04.286 [537/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:04.286 [538/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:04.286 [539/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:04.286 [540/707] Linking static target drivers/librte_mempool_ring.a 00:03:04.286 [541/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:04.545 [542/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:04.545 [543/707] Linking target drivers/librte_mempool_ring.so.24.0 00:03:04.545 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:04.804 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:05.062 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:05.062 [547/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:05.663 [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:05.663 [549/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:05.920 [550/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:05.920 [551/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:05.920 [552/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:05.920 [553/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:06.178 [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:06.178 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:06.436 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:06.436 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:06.436 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:06.695 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:06.695 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:06.952 [561/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:07.211 [562/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:07.211 [563/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:07.469 [564/707] Compiling C object 
app/dpdk-graph.p/graph_ethdev.c.o 00:03:07.469 [565/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:07.469 [566/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:07.469 [567/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:07.727 [568/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:07.727 [569/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:07.727 [570/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:07.727 [571/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:07.986 [572/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:07.986 [573/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:07.986 [574/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:08.245 [575/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:08.245 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:08.245 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:08.503 [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:08.503 [579/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:08.503 [580/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:08.503 [581/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:08.761 [582/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:08.761 [583/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:08.761 [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:08.761 [585/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:08.761 [586/707] Linking static target drivers/librte_net_i40e.a 00:03:08.761 [587/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:08.761 [588/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:09.020 [589/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:09.020 [590/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:09.277 [591/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:09.277 [592/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:09.535 [593/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.535 [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:09.535 [595/707] Linking target drivers/librte_net_i40e.so.24.0 00:03:09.535 [596/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:09.792 [597/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:09.792 [598/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:09.792 [599/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:10.050 [600/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:10.309 [601/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:10.309 [602/707] Compiling 
C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:10.309 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:10.309 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:10.309 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:10.309 [606/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:10.309 [607/707] Linking static target lib/librte_vhost.a 00:03:10.309 [608/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:10.599 [609/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:10.599 [610/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:10.599 [611/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:10.859 [612/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:10.859 [613/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:10.859 [614/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:10.859 [615/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:11.216 [616/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:11.216 [617/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:11.475 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:11.734 [619/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.734 [620/707] Linking target lib/librte_vhost.so.24.0 00:03:11.993 [621/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:11.993 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:12.253 [623/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:12.253 [624/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:12.253 [625/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:12.253 [626/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:12.253 [627/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:12.253 [628/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:12.512 [629/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:12.512 [630/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:12.512 [631/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:12.512 [632/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:12.512 [633/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:12.772 [634/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:12.772 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:12.772 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:12.772 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:13.030 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:13.030 [639/707] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:13.289 [640/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:13.289 [641/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:13.289 [642/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:13.289 [643/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:13.548 [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:13.548 [645/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:13.548 [646/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:13.548 [647/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:13.806 [648/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:13.806 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:13.806 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:13.806 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:14.065 [652/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:14.065 [653/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:14.065 [654/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:14.324 [655/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:14.324 [656/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:14.584 [657/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:14.584 [658/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:14.584 [659/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:14.842 [660/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:14.842 [661/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:14.842 [662/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:15.102 [663/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:15.102 [664/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:15.102 [665/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:15.102 [666/707] Linking static target lib/librte_pipeline.a 00:03:15.361 [667/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:15.620 [668/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:15.620 [669/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:15.620 [670/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:15.620 [671/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:15.878 [672/707] Linking target app/dpdk-dumpcap 00:03:15.878 [673/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:15.878 [674/707] Linking target app/dpdk-graph 00:03:16.154 [675/707] Linking target app/dpdk-pdump 00:03:16.154 [676/707] Linking target app/dpdk-proc-info 00:03:16.154 [677/707] Linking target app/dpdk-test-acl 00:03:16.412 [678/707] Linking target app/dpdk-test-bbdev 00:03:16.412 [679/707] Linking target app/dpdk-test-cmdline 00:03:16.412 [680/707] Linking target app/dpdk-test-compress-perf 00:03:16.412 [681/707] Linking target app/dpdk-test-crypto-perf 00:03:16.412 [682/707] 
Linking target app/dpdk-test-dma-perf 00:03:16.671 [683/707] Linking target app/dpdk-test-eventdev 00:03:16.671 [684/707] Linking target app/dpdk-test-fib 00:03:16.671 [685/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:16.671 [686/707] Linking target app/dpdk-test-flow-perf 00:03:16.928 [687/707] Linking target app/dpdk-test-gpudev 00:03:16.928 [688/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:16.928 [689/707] Linking target app/dpdk-test-mldev 00:03:16.928 [690/707] Linking target app/dpdk-test-pipeline 00:03:17.187 [691/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:17.187 [692/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:17.444 [693/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:17.444 [694/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:17.702 [695/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:17.702 [696/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:17.702 [697/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:17.702 [698/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:17.961 [699/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:18.221 [700/707] Linking target app/dpdk-test-sad 00:03:18.221 [701/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:18.221 [702/707] Linking target app/dpdk-test-regex 00:03:18.479 [703/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:18.738 [704/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.738 [705/707] Linking target app/dpdk-test-security-perf 00:03:18.738 [706/707] Linking target lib/librte_pipeline.so.24.0 00:03:18.995 [707/707] Linking target app/dpdk-testpmd 00:03:18.995 09:28:56 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:03:18.995 09:28:56 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:18.995 09:28:56 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:18.995 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:18.995 [0/1] Installing files. 
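(Note, not part of the captured log: the lines above show the autobuild script's platform check and the ninja install invocation for the DPDK tree. A minimal sketch of the equivalent manual steps follows, assuming a meson build directory named build-tmp and an install prefix of /home/vagrant/spdk_repo/dpdk/build, which is what the "Installing ... to ..." lines below suggest; the configure step itself is not shown in this excerpt and is assumed.)

  cd /home/vagrant/spdk_repo/dpdk
  meson setup build-tmp --prefix="$PWD/build"   # configure step (assumed; runs earlier in the job)
  ninja -C build-tmp -j10                       # compile and link all 707 targets listed above
  ninja -C build-tmp -j10 install               # install libraries, apps and the examples tree under the prefix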
00:03:19.256 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.257 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.258 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:19.258 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:19.258 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:19.259 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.260 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:19.261 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:19.261 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:19.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:19.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:19.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:19.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:19.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:19.262 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.262 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:19.521 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.521 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.522 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.522 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.522 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:19.522 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.522 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.522 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.522 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.522 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.522 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.522 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.522 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.522 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.522 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.783 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.783 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.783 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.783 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:19.783 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.783 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:19.783 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.783 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:19.783 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.783 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:19.783 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.783 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.783 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.783 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.783 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.783 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.783 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.783 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.783 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.783 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.783 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.783 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.783 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.783 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.783 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.783 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.783 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.783 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.783 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.783 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.783 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.785 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:19.786 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:19.786 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:19.786 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:19.786 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:19.786 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:19.786 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:19.786 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:19.786 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:19.786 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:19.786 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:19.786 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:19.786 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:19.786 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:19.786 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:19.786 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:19.786 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:19.786 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:19.786 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:19.786 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:19.786 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:19.786 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:19.786 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:19.786 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:19.786 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:19.786 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:19.786 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:19.787 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:19.787 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:19.787 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:19.787 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:19.787 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:19.787 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:19.787 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:19.787 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:19.787 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:19.787 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:19.787 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:19.787 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:19.787 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:19.787 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:19.787 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:19.787 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:19.787 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:19.787 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:19.787 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:19.787 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:19.787 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:19.787 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:19.787 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:19.787 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:19.787 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:19.787 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:19.787 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:19.787 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:19.787 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:19.787 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:19.787 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:19.787 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:19.787 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:19.787 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:19.787 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:19.787 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:19.787 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:19.787 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:19.787 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:19.787 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:19.787 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:19.787 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:19.787 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:19.787 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:19.787 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:19.787 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:19.787 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:19.787 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:19.787 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:19.787 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:19.787 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:19.787 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:19.787 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:19.787 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:19.787 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:19.787 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:19.787 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:19.787 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:19.787 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:19.787 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:19.787 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:19.787 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:19.787 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:19.787 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:19.787 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:19.787 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:19.787 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:19.787 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:19.787 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:19.787 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:19.787 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:19.787 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:19.787 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:19.787 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:19.787 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:19.787 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:19.787 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:19.787 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:19.787 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:19.787 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:19.787 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:19.787 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:19.787 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:19.787 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:19.787 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:19.787 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:19.787 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:19.787 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:19.787 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:19.787 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:19.787 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:19.787 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:19.787 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:19.787 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:19.787 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:19.787 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:19.787 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:19.787 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:19.787 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:19.787 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:19.787 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:19.787 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:19.787 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:19.787 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:19.787 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:19.788 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:19.788 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:19.788 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:19.788 09:28:57 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:03:19.788 09:28:57 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:19.788 00:03:19.788 real 0m47.422s 00:03:19.788 user 5m17.685s 00:03:19.788 sys 1m3.115s 00:03:19.788 09:28:57 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:19.788 09:28:57 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:19.788 ************************************ 00:03:19.788 END TEST build_native_dpdk 00:03:19.788 ************************************ 00:03:20.046 09:28:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:20.046 09:28:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:20.046 09:28:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:20.046 09:28:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:20.046 09:28:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:20.046 09:28:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:20.046 09:28:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:20.046 09:28:57 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:20.046 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:20.305 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.305 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:20.305 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:20.872 Using 'verbs' RDMA provider 00:03:40.858 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:53.145 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:53.713 Creating mk/config.mk...done. 00:03:53.713 Creating mk/cc.flags.mk...done. 00:03:53.713 Type 'make' to build. 00:03:53.713 09:29:31 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:53.713 09:29:31 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:53.713 09:29:31 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:53.713 09:29:31 -- common/autotest_common.sh@10 -- $ set +x 00:03:53.713 ************************************ 00:03:53.713 START TEST make 00:03:53.713 ************************************ 00:03:53.713 09:29:31 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:53.972 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:53.972 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:53.972 meson setup builddir \ 00:03:53.972 -Dwith-libaio=enabled \ 00:03:53.972 -Dwith-liburing=enabled \ 00:03:53.972 -Dwith-libvfn=disabled \ 00:03:53.972 -Dwith-spdk=false && \ 00:03:53.972 meson compile -C builddir && \ 00:03:53.972 cd -) 00:03:53.972 make[1]: Nothing to be done for 'all'. 
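Note: the configure step above picks up the DPDK install through the libdpdk.pc files placed in /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig ("Using ... for additional libs"). A minimal sketch of how an out-of-tree consumer could link against that same install via pkg-config — the app.c source name is only a placeholder and is not part of this build:

    # point pkg-config at the libdpdk.pc installed by the step above
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig:$PKG_CONFIG_PATH
    # resolve DPDK cflags/libs and build a hypothetical sample application
    cc app.c -o app $(pkg-config --cflags --libs libdpdk)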
00:03:56.506 The Meson build system 00:03:56.506 Version: 1.3.1 00:03:56.506 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:56.506 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:56.506 Build type: native build 00:03:56.506 Project name: xnvme 00:03:56.506 Project version: 0.7.3 00:03:56.506 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:56.506 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:56.506 Host machine cpu family: x86_64 00:03:56.506 Host machine cpu: x86_64 00:03:56.506 Message: host_machine.system: linux 00:03:56.506 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:56.506 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:56.506 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:56.506 Run-time dependency threads found: YES 00:03:56.506 Has header "setupapi.h" : NO 00:03:56.506 Has header "linux/blkzoned.h" : YES 00:03:56.506 Has header "linux/blkzoned.h" : YES (cached) 00:03:56.506 Has header "libaio.h" : YES 00:03:56.506 Library aio found: YES 00:03:56.506 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:56.506 Run-time dependency liburing found: YES 2.2 00:03:56.506 Dependency libvfn skipped: feature with-libvfn disabled 00:03:56.506 Run-time dependency appleframeworks found: NO (tried framework) 00:03:56.506 Run-time dependency appleframeworks found: NO (tried framework) 00:03:56.506 Configuring xnvme_config.h using configuration 00:03:56.506 Configuring xnvme.spec using configuration 00:03:56.506 Run-time dependency bash-completion found: YES 2.11 00:03:56.506 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:56.506 Program cp found: YES (/usr/bin/cp) 00:03:56.506 Has header "winsock2.h" : NO 00:03:56.506 Has header "dbghelp.h" : NO 00:03:56.506 Library rpcrt4 found: NO 00:03:56.506 Library rt found: YES 00:03:56.506 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:56.506 Found CMake: /usr/bin/cmake (3.27.7) 00:03:56.506 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:56.506 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:56.506 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:56.506 Build targets in project: 32 00:03:56.506 00:03:56.506 xnvme 0.7.3 00:03:56.506 00:03:56.506 User defined options 00:03:56.506 with-libaio : enabled 00:03:56.506 with-liburing: enabled 00:03:56.506 with-libvfn : disabled 00:03:56.506 with-spdk : false 00:03:56.506 00:03:56.506 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:56.506 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:56.506 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:56.765 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:56.765 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:56.765 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:56.765 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:56.765 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:56.765 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:56.765 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:56.765 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:56.765 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:56.765 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:56.765 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:56.765 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:56.765 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:56.765 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:56.765 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:56.765 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:56.765 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:56.765 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:56.765 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:56.765 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:56.765 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:57.025 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:57.025 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:57.025 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:57.025 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:57.025 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:57.025 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:57.025 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:57.025 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:57.025 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:57.025 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:57.025 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:57.025 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:57.025 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:57.025 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:57.025 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:57.025 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:57.025 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:57.025 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:57.025 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:57.025 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:57.025 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:57.025 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:57.025 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:57.025 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:57.025 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:57.025 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:57.025 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:57.025 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:57.025 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:57.026 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:57.026 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_file.c.o 00:03:57.026 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:57.026 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:57.285 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:57.285 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:57.285 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:57.285 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:57.285 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:57.285 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:57.285 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:57.285 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:57.285 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:57.285 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:57.285 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:57.285 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:57.285 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:57.285 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:57.285 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:57.285 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:57.285 [72/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:57.285 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:57.285 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:57.544 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:57.544 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:57.544 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:57.544 [78/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:57.544 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:57.544 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:57.544 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:57.544 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:57.544 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:57.544 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:57.544 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:57.544 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:57.544 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:57.544 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:57.544 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:57.544 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:57.544 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:57.544 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:57.544 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:57.804 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:57.804 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:57.804 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:57.804 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:57.804 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:57.804 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:57.804 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:57.804 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:57.804 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:57.804 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:57.804 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:57.804 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:57.804 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:57.804 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:57.804 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:57.804 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:57.804 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:57.804 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:57.804 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:57.804 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:57.804 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:57.804 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:57.804 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:57.804 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:57.804 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:57.804 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:57.804 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:57.804 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:57.804 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:57.804 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:57.804 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:57.804 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:57.804 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:58.094 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:58.094 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:58.094 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:58.094 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:58.094 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:58.094 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:58.094 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:58.094 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:58.094 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:58.094 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:58.094 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:58.094 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:58.094 [139/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:58.094 [140/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:58.094 [141/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:58.094 [142/203] Compiling C object 
lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:58.094 [143/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:58.094 [144/203] Linking target lib/libxnvme.so 00:03:58.094 [145/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:58.094 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:58.094 [147/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:58.354 [148/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:58.354 [149/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:58.354 [150/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:58.354 [151/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:58.354 [152/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:58.354 [153/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:58.354 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:58.354 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:58.354 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:58.354 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:58.354 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:58.354 [159/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:58.354 [160/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:58.354 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:58.354 [162/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:58.354 [163/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:58.354 [164/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:58.612 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:58.612 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:58.612 [167/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:58.612 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:58.612 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:58.612 [170/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:58.612 [171/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:58.612 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:58.612 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:58.612 [174/203] Linking static target lib/libxnvme.a 00:03:58.870 [175/203] Linking target tests/xnvme_tests_scc 00:03:58.870 [176/203] Linking target tests/xnvme_tests_buf 00:03:58.870 [177/203] Linking target tests/xnvme_tests_async_intf 00:03:58.870 [178/203] Linking target tests/xnvme_tests_lblk 00:03:58.870 [179/203] Linking target tests/xnvme_tests_enum 00:03:58.870 [180/203] Linking target tests/xnvme_tests_znd_state 00:03:58.870 [181/203] Linking target tests/xnvme_tests_cli 00:03:58.870 [182/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:58.870 [183/203] Linking target tests/xnvme_tests_xnvme_file 00:03:58.870 [184/203] Linking target tests/xnvme_tests_znd_append 00:03:58.870 [185/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:58.870 [186/203] Linking target tests/xnvme_tests_ioworker 00:03:58.870 [187/203] Linking target tests/xnvme_tests_kvs 00:03:58.870 [188/203] Linking target tests/xnvme_tests_map 00:03:58.870 [189/203] Linking target tools/xdd 00:03:58.870 [190/203] Linking target tools/xnvme_file 00:03:58.870 [191/203] Linking target 
examples/xnvme_dev 00:03:58.870 [192/203] Linking target tools/xnvme 00:03:58.870 [193/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:58.870 [194/203] Linking target tools/lblk 00:03:58.870 [195/203] Linking target tools/zoned 00:03:58.870 [196/203] Linking target examples/zoned_io_sync 00:03:58.870 [197/203] Linking target tools/kvs 00:03:58.870 [198/203] Linking target examples/xnvme_enum 00:03:58.870 [199/203] Linking target examples/xnvme_hello 00:03:58.870 [200/203] Linking target examples/xnvme_single_sync 00:03:58.870 [201/203] Linking target examples/xnvme_single_async 00:03:58.870 [202/203] Linking target examples/xnvme_io_async 00:03:58.870 [203/203] Linking target examples/zoned_io_async 00:03:58.870 INFO: autodetecting backend as ninja 00:03:58.870 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:58.870 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:16.954 CC lib/log/log.o 00:04:16.954 CC lib/log/log_flags.o 00:04:16.954 CC lib/log/log_deprecated.o 00:04:16.954 CC lib/ut/ut.o 00:04:16.954 CC lib/ut_mock/mock.o 00:04:16.954 LIB libspdk_ut.a 00:04:16.954 LIB libspdk_log.a 00:04:16.954 LIB libspdk_ut_mock.a 00:04:16.954 SO libspdk_ut.so.2.0 00:04:16.954 SO libspdk_log.so.7.0 00:04:16.954 SO libspdk_ut_mock.so.6.0 00:04:16.954 SYMLINK libspdk_ut.so 00:04:16.954 SYMLINK libspdk_ut_mock.so 00:04:16.954 SYMLINK libspdk_log.so 00:04:16.954 CC lib/dma/dma.o 00:04:16.954 CC lib/util/bit_array.o 00:04:16.954 CC lib/util/base64.o 00:04:16.954 CXX lib/trace_parser/trace.o 00:04:16.954 CC lib/util/crc32.o 00:04:16.954 CC lib/util/cpuset.o 00:04:16.954 CC lib/util/crc16.o 00:04:16.954 CC lib/util/crc32c.o 00:04:16.954 CC lib/ioat/ioat.o 00:04:16.954 CC lib/vfio_user/host/vfio_user_pci.o 00:04:16.954 CC lib/vfio_user/host/vfio_user.o 00:04:16.954 CC lib/util/crc32_ieee.o 00:04:16.954 CC lib/util/crc64.o 00:04:16.954 LIB libspdk_dma.a 00:04:16.954 CC lib/util/dif.o 00:04:16.954 CC lib/util/fd.o 00:04:16.954 CC lib/util/fd_group.o 00:04:16.954 SO libspdk_dma.so.4.0 00:04:16.954 CC lib/util/file.o 00:04:16.954 CC lib/util/hexlify.o 00:04:16.954 SYMLINK libspdk_dma.so 00:04:16.954 CC lib/util/iov.o 00:04:16.954 LIB libspdk_ioat.a 00:04:16.954 CC lib/util/math.o 00:04:16.954 CC lib/util/net.o 00:04:16.954 SO libspdk_ioat.so.7.0 00:04:16.954 LIB libspdk_vfio_user.a 00:04:16.954 SO libspdk_vfio_user.so.5.0 00:04:16.954 CC lib/util/pipe.o 00:04:16.954 CC lib/util/strerror_tls.o 00:04:16.954 SYMLINK libspdk_ioat.so 00:04:16.954 CC lib/util/string.o 00:04:16.954 SYMLINK libspdk_vfio_user.so 00:04:16.954 CC lib/util/uuid.o 00:04:16.954 CC lib/util/xor.o 00:04:16.954 CC lib/util/zipf.o 00:04:16.954 LIB libspdk_util.a 00:04:16.954 SO libspdk_util.so.10.0 00:04:16.954 LIB libspdk_trace_parser.a 00:04:16.954 SO libspdk_trace_parser.so.5.0 00:04:16.954 SYMLINK libspdk_util.so 00:04:16.954 SYMLINK libspdk_trace_parser.so 00:04:16.954 CC lib/rdma_provider/common.o 00:04:16.954 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:16.954 CC lib/vmd/vmd.o 00:04:16.954 CC lib/vmd/led.o 00:04:16.954 CC lib/rdma_utils/rdma_utils.o 00:04:16.954 CC lib/json/json_parse.o 00:04:16.954 CC lib/json/json_util.o 00:04:16.954 CC lib/idxd/idxd.o 00:04:16.954 CC lib/conf/conf.o 00:04:16.954 CC lib/env_dpdk/env.o 00:04:16.954 CC lib/env_dpdk/memory.o 00:04:16.954 CC lib/env_dpdk/pci.o 00:04:16.954 LIB libspdk_rdma_provider.a 00:04:16.954 SO libspdk_rdma_provider.so.6.0 00:04:16.954 LIB libspdk_conf.a 00:04:16.954 CC lib/json/json_write.o 00:04:16.954 CC 
lib/env_dpdk/init.o 00:04:16.954 SO libspdk_conf.so.6.0 00:04:16.954 LIB libspdk_rdma_utils.a 00:04:16.954 SYMLINK libspdk_rdma_provider.so 00:04:16.954 CC lib/idxd/idxd_user.o 00:04:16.954 SO libspdk_rdma_utils.so.1.0 00:04:16.954 SYMLINK libspdk_conf.so 00:04:16.954 CC lib/env_dpdk/threads.o 00:04:16.954 SYMLINK libspdk_rdma_utils.so 00:04:16.954 CC lib/env_dpdk/pci_ioat.o 00:04:16.954 CC lib/env_dpdk/pci_virtio.o 00:04:16.954 CC lib/env_dpdk/pci_vmd.o 00:04:16.954 LIB libspdk_json.a 00:04:16.954 CC lib/idxd/idxd_kernel.o 00:04:16.954 CC lib/env_dpdk/pci_idxd.o 00:04:16.954 SO libspdk_json.so.6.0 00:04:16.954 CC lib/env_dpdk/pci_event.o 00:04:16.954 CC lib/env_dpdk/sigbus_handler.o 00:04:16.954 CC lib/env_dpdk/pci_dpdk.o 00:04:16.954 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:17.213 SYMLINK libspdk_json.so 00:04:17.213 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:17.213 LIB libspdk_vmd.a 00:04:17.213 LIB libspdk_idxd.a 00:04:17.213 SO libspdk_vmd.so.6.0 00:04:17.213 SO libspdk_idxd.so.12.0 00:04:17.213 CC lib/jsonrpc/jsonrpc_server.o 00:04:17.213 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:17.213 CC lib/jsonrpc/jsonrpc_client.o 00:04:17.213 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:17.213 SYMLINK libspdk_vmd.so 00:04:17.213 SYMLINK libspdk_idxd.so 00:04:17.470 LIB libspdk_jsonrpc.a 00:04:17.729 SO libspdk_jsonrpc.so.6.0 00:04:17.729 SYMLINK libspdk_jsonrpc.so 00:04:17.988 CC lib/rpc/rpc.o 00:04:18.246 LIB libspdk_env_dpdk.a 00:04:18.246 LIB libspdk_rpc.a 00:04:18.246 SO libspdk_env_dpdk.so.15.0 00:04:18.504 SO libspdk_rpc.so.6.0 00:04:18.504 SYMLINK libspdk_rpc.so 00:04:18.504 SYMLINK libspdk_env_dpdk.so 00:04:18.763 CC lib/trace/trace.o 00:04:18.763 CC lib/trace/trace_rpc.o 00:04:18.763 CC lib/trace/trace_flags.o 00:04:18.763 CC lib/notify/notify.o 00:04:18.763 CC lib/notify/notify_rpc.o 00:04:18.763 CC lib/keyring/keyring.o 00:04:18.763 CC lib/keyring/keyring_rpc.o 00:04:19.022 LIB libspdk_notify.a 00:04:19.022 SO libspdk_notify.so.6.0 00:04:19.022 LIB libspdk_keyring.a 00:04:19.022 SYMLINK libspdk_notify.so 00:04:19.022 SO libspdk_keyring.so.1.0 00:04:19.281 LIB libspdk_trace.a 00:04:19.281 SYMLINK libspdk_keyring.so 00:04:19.281 SO libspdk_trace.so.10.0 00:04:19.281 SYMLINK libspdk_trace.so 00:04:19.848 CC lib/thread/thread.o 00:04:19.848 CC lib/thread/iobuf.o 00:04:19.848 CC lib/sock/sock.o 00:04:19.848 CC lib/sock/sock_rpc.o 00:04:20.415 LIB libspdk_sock.a 00:04:20.415 SO libspdk_sock.so.10.0 00:04:20.415 SYMLINK libspdk_sock.so 00:04:20.983 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:20.983 CC lib/nvme/nvme_ctrlr.o 00:04:20.983 CC lib/nvme/nvme_fabric.o 00:04:20.983 CC lib/nvme/nvme_ns_cmd.o 00:04:20.983 CC lib/nvme/nvme_ns.o 00:04:20.983 CC lib/nvme/nvme_pcie_common.o 00:04:20.983 CC lib/nvme/nvme_qpair.o 00:04:20.983 CC lib/nvme/nvme_pcie.o 00:04:20.983 CC lib/nvme/nvme.o 00:04:21.551 LIB libspdk_thread.a 00:04:21.551 CC lib/nvme/nvme_quirks.o 00:04:21.551 SO libspdk_thread.so.10.1 00:04:21.551 CC lib/nvme/nvme_transport.o 00:04:21.551 SYMLINK libspdk_thread.so 00:04:21.551 CC lib/nvme/nvme_discovery.o 00:04:21.551 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:21.811 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:21.811 CC lib/nvme/nvme_tcp.o 00:04:21.811 CC lib/accel/accel.o 00:04:22.070 CC lib/nvme/nvme_opal.o 00:04:22.070 CC lib/blob/blobstore.o 00:04:22.070 CC lib/blob/request.o 00:04:22.070 CC lib/blob/zeroes.o 00:04:22.330 CC lib/blob/blob_bs_dev.o 00:04:22.330 CC lib/accel/accel_rpc.o 00:04:22.330 CC lib/accel/accel_sw.o 00:04:22.589 CC lib/nvme/nvme_io_msg.o 00:04:22.589 CC lib/init/json_config.o 
00:04:22.589 CC lib/nvme/nvme_poll_group.o 00:04:22.589 CC lib/nvme/nvme_zns.o 00:04:22.589 CC lib/virtio/virtio.o 00:04:22.589 CC lib/virtio/virtio_vhost_user.o 00:04:22.848 CC lib/init/subsystem.o 00:04:22.848 CC lib/virtio/virtio_vfio_user.o 00:04:22.848 CC lib/init/subsystem_rpc.o 00:04:23.107 CC lib/virtio/virtio_pci.o 00:04:23.107 CC lib/init/rpc.o 00:04:23.107 CC lib/nvme/nvme_stubs.o 00:04:23.107 CC lib/nvme/nvme_auth.o 00:04:23.107 CC lib/nvme/nvme_cuse.o 00:04:23.107 CC lib/nvme/nvme_rdma.o 00:04:23.107 LIB libspdk_accel.a 00:04:23.107 LIB libspdk_init.a 00:04:23.107 SO libspdk_accel.so.16.0 00:04:23.365 SO libspdk_init.so.5.0 00:04:23.366 SYMLINK libspdk_accel.so 00:04:23.366 LIB libspdk_virtio.a 00:04:23.366 SYMLINK libspdk_init.so 00:04:23.366 SO libspdk_virtio.so.7.0 00:04:23.366 SYMLINK libspdk_virtio.so 00:04:23.623 CC lib/bdev/bdev.o 00:04:23.623 CC lib/bdev/bdev_zone.o 00:04:23.623 CC lib/bdev/part.o 00:04:23.623 CC lib/bdev/bdev_rpc.o 00:04:23.623 CC lib/event/app.o 00:04:23.624 CC lib/bdev/scsi_nvme.o 00:04:23.882 CC lib/event/reactor.o 00:04:23.882 CC lib/event/log_rpc.o 00:04:23.882 CC lib/event/app_rpc.o 00:04:23.882 CC lib/event/scheduler_static.o 00:04:24.142 LIB libspdk_event.a 00:04:24.402 SO libspdk_event.so.14.0 00:04:24.402 SYMLINK libspdk_event.so 00:04:24.664 LIB libspdk_nvme.a 00:04:24.921 SO libspdk_nvme.so.13.1 00:04:25.180 SYMLINK libspdk_nvme.so 00:04:25.754 LIB libspdk_blob.a 00:04:25.754 SO libspdk_blob.so.11.0 00:04:26.014 SYMLINK libspdk_blob.so 00:04:26.272 CC lib/blobfs/blobfs.o 00:04:26.272 CC lib/lvol/lvol.o 00:04:26.272 CC lib/blobfs/tree.o 00:04:26.531 LIB libspdk_bdev.a 00:04:26.789 SO libspdk_bdev.so.16.0 00:04:26.789 SYMLINK libspdk_bdev.so 00:04:27.047 CC lib/nbd/nbd.o 00:04:27.047 CC lib/nbd/nbd_rpc.o 00:04:27.047 CC lib/ublk/ublk.o 00:04:27.047 CC lib/scsi/dev.o 00:04:27.047 CC lib/ublk/ublk_rpc.o 00:04:27.047 CC lib/scsi/lun.o 00:04:27.047 CC lib/ftl/ftl_core.o 00:04:27.047 CC lib/nvmf/ctrlr.o 00:04:27.306 LIB libspdk_blobfs.a 00:04:27.306 CC lib/nvmf/ctrlr_discovery.o 00:04:27.306 LIB libspdk_lvol.a 00:04:27.306 SO libspdk_blobfs.so.10.0 00:04:27.306 CC lib/scsi/port.o 00:04:27.306 SO libspdk_lvol.so.10.0 00:04:27.306 SYMLINK libspdk_blobfs.so 00:04:27.306 CC lib/ftl/ftl_init.o 00:04:27.306 CC lib/nvmf/ctrlr_bdev.o 00:04:27.306 SYMLINK libspdk_lvol.so 00:04:27.306 CC lib/ftl/ftl_layout.o 00:04:27.306 CC lib/ftl/ftl_debug.o 00:04:27.603 CC lib/scsi/scsi.o 00:04:27.603 LIB libspdk_nbd.a 00:04:27.603 SO libspdk_nbd.so.7.0 00:04:27.603 CC lib/ftl/ftl_io.o 00:04:27.603 CC lib/nvmf/subsystem.o 00:04:27.603 CC lib/scsi/scsi_bdev.o 00:04:27.603 SYMLINK libspdk_nbd.so 00:04:27.603 CC lib/scsi/scsi_pr.o 00:04:27.603 CC lib/ftl/ftl_sb.o 00:04:27.603 CC lib/ftl/ftl_l2p.o 00:04:27.860 LIB libspdk_ublk.a 00:04:27.860 CC lib/nvmf/nvmf.o 00:04:27.860 SO libspdk_ublk.so.3.0 00:04:27.860 CC lib/ftl/ftl_l2p_flat.o 00:04:27.860 CC lib/ftl/ftl_nv_cache.o 00:04:27.860 SYMLINK libspdk_ublk.so 00:04:27.860 CC lib/ftl/ftl_band.o 00:04:27.860 CC lib/nvmf/nvmf_rpc.o 00:04:27.860 CC lib/nvmf/transport.o 00:04:28.116 CC lib/nvmf/tcp.o 00:04:28.116 CC lib/scsi/scsi_rpc.o 00:04:28.116 CC lib/nvmf/stubs.o 00:04:28.117 CC lib/scsi/task.o 00:04:28.374 CC lib/ftl/ftl_band_ops.o 00:04:28.374 LIB libspdk_scsi.a 00:04:28.632 SO libspdk_scsi.so.9.0 00:04:28.632 CC lib/nvmf/mdns_server.o 00:04:28.632 CC lib/nvmf/rdma.o 00:04:28.632 SYMLINK libspdk_scsi.so 00:04:28.632 CC lib/nvmf/auth.o 00:04:28.890 CC lib/ftl/ftl_writer.o 00:04:28.890 CC lib/iscsi/conn.o 00:04:28.890 CC 
lib/ftl/ftl_rq.o 00:04:28.890 CC lib/vhost/vhost.o 00:04:28.890 CC lib/vhost/vhost_rpc.o 00:04:29.148 CC lib/iscsi/init_grp.o 00:04:29.148 CC lib/iscsi/iscsi.o 00:04:29.148 CC lib/iscsi/md5.o 00:04:29.148 CC lib/ftl/ftl_reloc.o 00:04:29.148 CC lib/iscsi/param.o 00:04:29.406 CC lib/iscsi/portal_grp.o 00:04:29.406 CC lib/ftl/ftl_l2p_cache.o 00:04:29.406 CC lib/ftl/ftl_p2l.o 00:04:29.663 CC lib/ftl/mngt/ftl_mngt.o 00:04:29.663 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:29.663 CC lib/vhost/vhost_scsi.o 00:04:29.663 CC lib/iscsi/tgt_node.o 00:04:29.663 CC lib/vhost/vhost_blk.o 00:04:29.920 CC lib/vhost/rte_vhost_user.o 00:04:29.920 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:29.920 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:29.920 CC lib/iscsi/iscsi_subsystem.o 00:04:29.920 CC lib/iscsi/iscsi_rpc.o 00:04:29.920 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:30.177 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:30.177 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:30.177 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:30.435 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:30.435 CC lib/iscsi/task.o 00:04:30.435 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:30.435 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:30.435 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:30.436 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:30.436 CC lib/ftl/utils/ftl_conf.o 00:04:30.694 CC lib/ftl/utils/ftl_md.o 00:04:30.694 CC lib/ftl/utils/ftl_mempool.o 00:04:30.694 CC lib/ftl/utils/ftl_bitmap.o 00:04:30.694 CC lib/ftl/utils/ftl_property.o 00:04:30.694 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:30.694 LIB libspdk_iscsi.a 00:04:30.694 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:30.694 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:30.694 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:30.694 SO libspdk_iscsi.so.8.0 00:04:30.952 LIB libspdk_vhost.a 00:04:30.952 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:30.952 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:30.952 SO libspdk_vhost.so.8.0 00:04:30.952 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:30.952 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:30.952 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:30.952 SYMLINK libspdk_iscsi.so 00:04:30.952 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:30.952 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:30.952 SYMLINK libspdk_vhost.so 00:04:30.952 CC lib/ftl/base/ftl_base_dev.o 00:04:30.952 CC lib/ftl/base/ftl_base_bdev.o 00:04:30.952 CC lib/ftl/ftl_trace.o 00:04:31.209 LIB libspdk_nvmf.a 00:04:31.209 SO libspdk_nvmf.so.19.0 00:04:31.468 LIB libspdk_ftl.a 00:04:31.468 SYMLINK libspdk_nvmf.so 00:04:31.727 SO libspdk_ftl.so.9.0 00:04:31.985 SYMLINK libspdk_ftl.so 00:04:32.595 CC module/env_dpdk/env_dpdk_rpc.o 00:04:32.595 CC module/sock/posix/posix.o 00:04:32.595 CC module/accel/error/accel_error.o 00:04:32.595 CC module/accel/iaa/accel_iaa.o 00:04:32.595 CC module/keyring/linux/keyring.o 00:04:32.595 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:32.595 CC module/accel/dsa/accel_dsa.o 00:04:32.595 CC module/keyring/file/keyring.o 00:04:32.595 CC module/accel/ioat/accel_ioat.o 00:04:32.595 CC module/blob/bdev/blob_bdev.o 00:04:32.595 LIB libspdk_env_dpdk_rpc.a 00:04:32.595 SO libspdk_env_dpdk_rpc.so.6.0 00:04:32.595 SYMLINK libspdk_env_dpdk_rpc.so 00:04:32.595 CC module/keyring/linux/keyring_rpc.o 00:04:32.595 CC module/accel/iaa/accel_iaa_rpc.o 00:04:32.595 CC module/keyring/file/keyring_rpc.o 00:04:32.595 CC module/accel/error/accel_error_rpc.o 00:04:32.595 CC module/accel/ioat/accel_ioat_rpc.o 00:04:32.914 LIB libspdk_scheduler_dynamic.a 00:04:32.914 CC module/accel/dsa/accel_dsa_rpc.o 00:04:32.914 SO libspdk_scheduler_dynamic.so.4.0 00:04:32.914 
LIB libspdk_accel_iaa.a 00:04:32.914 LIB libspdk_blob_bdev.a 00:04:32.914 LIB libspdk_keyring_file.a 00:04:32.914 LIB libspdk_accel_error.a 00:04:32.914 SO libspdk_accel_iaa.so.3.0 00:04:32.914 LIB libspdk_accel_ioat.a 00:04:32.914 LIB libspdk_keyring_linux.a 00:04:32.914 SO libspdk_blob_bdev.so.11.0 00:04:32.914 SO libspdk_keyring_file.so.1.0 00:04:32.914 SO libspdk_accel_error.so.2.0 00:04:32.914 SYMLINK libspdk_scheduler_dynamic.so 00:04:32.914 SO libspdk_accel_ioat.so.6.0 00:04:32.914 SO libspdk_keyring_linux.so.1.0 00:04:32.914 SYMLINK libspdk_blob_bdev.so 00:04:32.914 SYMLINK libspdk_accel_error.so 00:04:32.914 SYMLINK libspdk_accel_iaa.so 00:04:32.914 SYMLINK libspdk_keyring_file.so 00:04:32.914 LIB libspdk_accel_dsa.a 00:04:32.914 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:32.914 SYMLINK libspdk_accel_ioat.so 00:04:32.914 SYMLINK libspdk_keyring_linux.so 00:04:32.914 SO libspdk_accel_dsa.so.5.0 00:04:32.914 SYMLINK libspdk_accel_dsa.so 00:04:33.171 LIB libspdk_scheduler_dpdk_governor.a 00:04:33.171 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:33.171 CC module/scheduler/gscheduler/gscheduler.o 00:04:33.171 CC module/bdev/gpt/gpt.o 00:04:33.171 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:33.171 CC module/blobfs/bdev/blobfs_bdev.o 00:04:33.171 CC module/bdev/delay/vbdev_delay.o 00:04:33.171 CC module/bdev/malloc/bdev_malloc.o 00:04:33.171 CC module/bdev/error/vbdev_error.o 00:04:33.171 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:33.171 CC module/bdev/lvol/vbdev_lvol.o 00:04:33.171 CC module/bdev/null/bdev_null.o 00:04:33.171 LIB libspdk_scheduler_gscheduler.a 00:04:33.171 LIB libspdk_sock_posix.a 00:04:33.171 SO libspdk_scheduler_gscheduler.so.4.0 00:04:33.171 SO libspdk_sock_posix.so.6.0 00:04:33.429 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:33.429 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:33.429 CC module/bdev/gpt/vbdev_gpt.o 00:04:33.429 SYMLINK libspdk_scheduler_gscheduler.so 00:04:33.429 SYMLINK libspdk_sock_posix.so 00:04:33.429 CC module/bdev/error/vbdev_error_rpc.o 00:04:33.429 CC module/bdev/null/bdev_null_rpc.o 00:04:33.429 LIB libspdk_blobfs_bdev.a 00:04:33.429 CC module/bdev/nvme/bdev_nvme.o 00:04:33.429 SO libspdk_blobfs_bdev.so.6.0 00:04:33.429 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:33.690 CC module/bdev/passthru/vbdev_passthru.o 00:04:33.690 LIB libspdk_bdev_delay.a 00:04:33.690 LIB libspdk_bdev_error.a 00:04:33.690 LIB libspdk_bdev_gpt.a 00:04:33.690 SO libspdk_bdev_delay.so.6.0 00:04:33.690 SYMLINK libspdk_blobfs_bdev.so 00:04:33.690 SO libspdk_bdev_error.so.6.0 00:04:33.690 LIB libspdk_bdev_null.a 00:04:33.690 SO libspdk_bdev_gpt.so.6.0 00:04:33.690 SYMLINK libspdk_bdev_delay.so 00:04:33.690 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:33.690 SO libspdk_bdev_null.so.6.0 00:04:33.690 SYMLINK libspdk_bdev_error.so 00:04:33.690 SYMLINK libspdk_bdev_gpt.so 00:04:33.690 LIB libspdk_bdev_malloc.a 00:04:33.690 SYMLINK libspdk_bdev_null.so 00:04:33.690 LIB libspdk_bdev_lvol.a 00:04:33.690 SO libspdk_bdev_malloc.so.6.0 00:04:33.949 SO libspdk_bdev_lvol.so.6.0 00:04:33.949 SYMLINK libspdk_bdev_malloc.so 00:04:33.949 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:33.949 CC module/bdev/raid/bdev_raid.o 00:04:33.949 SYMLINK libspdk_bdev_lvol.so 00:04:33.949 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:33.949 CC module/bdev/xnvme/bdev_xnvme.o 00:04:33.949 CC module/bdev/split/vbdev_split.o 00:04:33.949 CC module/bdev/aio/bdev_aio.o 00:04:34.208 CC module/bdev/ftl/bdev_ftl.o 00:04:34.208 LIB libspdk_bdev_passthru.a 00:04:34.208 CC 
module/bdev/iscsi/bdev_iscsi.o 00:04:34.208 SO libspdk_bdev_passthru.so.6.0 00:04:34.208 SYMLINK libspdk_bdev_passthru.so 00:04:34.208 CC module/bdev/split/vbdev_split_rpc.o 00:04:34.208 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:34.208 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:34.465 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:34.465 LIB libspdk_bdev_split.a 00:04:34.465 CC module/bdev/aio/bdev_aio_rpc.o 00:04:34.465 CC module/bdev/nvme/nvme_rpc.o 00:04:34.465 SO libspdk_bdev_split.so.6.0 00:04:34.465 LIB libspdk_bdev_xnvme.a 00:04:34.465 LIB libspdk_bdev_ftl.a 00:04:34.465 SO libspdk_bdev_xnvme.so.3.0 00:04:34.465 SYMLINK libspdk_bdev_split.so 00:04:34.466 SO libspdk_bdev_ftl.so.6.0 00:04:34.466 CC module/bdev/nvme/bdev_mdns_client.o 00:04:34.466 LIB libspdk_bdev_zone_block.a 00:04:34.466 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:34.466 SO libspdk_bdev_zone_block.so.6.0 00:04:34.466 LIB libspdk_bdev_aio.a 00:04:34.466 SYMLINK libspdk_bdev_xnvme.so 00:04:34.724 SYMLINK libspdk_bdev_ftl.so 00:04:34.724 CC module/bdev/nvme/vbdev_opal.o 00:04:34.724 CC module/bdev/raid/bdev_raid_rpc.o 00:04:34.724 SO libspdk_bdev_aio.so.6.0 00:04:34.724 SYMLINK libspdk_bdev_zone_block.so 00:04:34.724 CC module/bdev/raid/bdev_raid_sb.o 00:04:34.724 SYMLINK libspdk_bdev_aio.so 00:04:34.724 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:34.724 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:34.724 CC module/bdev/raid/raid0.o 00:04:34.724 CC module/bdev/raid/raid1.o 00:04:34.724 LIB libspdk_bdev_iscsi.a 00:04:34.724 SO libspdk_bdev_iscsi.so.6.0 00:04:34.724 SYMLINK libspdk_bdev_iscsi.so 00:04:34.724 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:34.981 CC module/bdev/raid/concat.o 00:04:34.981 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:34.981 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:35.239 LIB libspdk_bdev_raid.a 00:04:35.239 SO libspdk_bdev_raid.so.6.0 00:04:35.239 LIB libspdk_bdev_virtio.a 00:04:35.239 SO libspdk_bdev_virtio.so.6.0 00:04:35.497 SYMLINK libspdk_bdev_raid.so 00:04:35.497 SYMLINK libspdk_bdev_virtio.so 00:04:36.062 LIB libspdk_bdev_nvme.a 00:04:36.321 SO libspdk_bdev_nvme.so.7.0 00:04:36.321 SYMLINK libspdk_bdev_nvme.so 00:04:36.889 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:36.889 CC module/event/subsystems/iobuf/iobuf.o 00:04:36.889 CC module/event/subsystems/scheduler/scheduler.o 00:04:36.889 CC module/event/subsystems/keyring/keyring.o 00:04:37.200 CC module/event/subsystems/sock/sock.o 00:04:37.200 CC module/event/subsystems/vmd/vmd.o 00:04:37.200 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:37.200 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:37.200 LIB libspdk_event_scheduler.a 00:04:37.200 LIB libspdk_event_keyring.a 00:04:37.200 LIB libspdk_event_iobuf.a 00:04:37.200 LIB libspdk_event_vhost_blk.a 00:04:37.200 LIB libspdk_event_vmd.a 00:04:37.200 SO libspdk_event_scheduler.so.4.0 00:04:37.200 LIB libspdk_event_sock.a 00:04:37.200 SO libspdk_event_keyring.so.1.0 00:04:37.200 SO libspdk_event_iobuf.so.3.0 00:04:37.200 SO libspdk_event_vhost_blk.so.3.0 00:04:37.200 SO libspdk_event_vmd.so.6.0 00:04:37.200 SO libspdk_event_sock.so.5.0 00:04:37.200 SYMLINK libspdk_event_scheduler.so 00:04:37.200 SYMLINK libspdk_event_keyring.so 00:04:37.200 SYMLINK libspdk_event_iobuf.so 00:04:37.200 SYMLINK libspdk_event_vhost_blk.so 00:04:37.200 SYMLINK libspdk_event_vmd.so 00:04:37.200 SYMLINK libspdk_event_sock.so 00:04:37.785 CC module/event/subsystems/accel/accel.o 00:04:37.785 LIB libspdk_event_accel.a 00:04:37.785 SO libspdk_event_accel.so.6.0 00:04:38.044 SYMLINK 
libspdk_event_accel.so 00:04:38.303 CC module/event/subsystems/bdev/bdev.o 00:04:38.562 LIB libspdk_event_bdev.a 00:04:38.562 SO libspdk_event_bdev.so.6.0 00:04:38.562 SYMLINK libspdk_event_bdev.so 00:04:38.821 CC module/event/subsystems/scsi/scsi.o 00:04:38.821 CC module/event/subsystems/nbd/nbd.o 00:04:38.821 CC module/event/subsystems/ublk/ublk.o 00:04:38.821 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:38.821 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:39.079 LIB libspdk_event_scsi.a 00:04:39.079 SO libspdk_event_scsi.so.6.0 00:04:39.079 LIB libspdk_event_nbd.a 00:04:39.079 SO libspdk_event_nbd.so.6.0 00:04:39.079 SYMLINK libspdk_event_scsi.so 00:04:39.079 LIB libspdk_event_ublk.a 00:04:39.337 SO libspdk_event_ublk.so.3.0 00:04:39.337 SYMLINK libspdk_event_nbd.so 00:04:39.337 LIB libspdk_event_nvmf.a 00:04:39.337 SYMLINK libspdk_event_ublk.so 00:04:39.337 SO libspdk_event_nvmf.so.6.0 00:04:39.337 SYMLINK libspdk_event_nvmf.so 00:04:39.337 CC module/event/subsystems/iscsi/iscsi.o 00:04:39.337 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:39.596 LIB libspdk_event_iscsi.a 00:04:39.596 LIB libspdk_event_vhost_scsi.a 00:04:39.596 SO libspdk_event_vhost_scsi.so.3.0 00:04:39.596 SO libspdk_event_iscsi.so.6.0 00:04:39.854 SYMLINK libspdk_event_vhost_scsi.so 00:04:39.854 SYMLINK libspdk_event_iscsi.so 00:04:39.854 SO libspdk.so.6.0 00:04:39.854 SYMLINK libspdk.so 00:04:40.421 CC app/trace_record/trace_record.o 00:04:40.421 CXX app/trace/trace.o 00:04:40.421 CC app/spdk_lspci/spdk_lspci.o 00:04:40.421 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:40.421 CC app/iscsi_tgt/iscsi_tgt.o 00:04:40.421 CC app/nvmf_tgt/nvmf_main.o 00:04:40.421 CC examples/util/zipf/zipf.o 00:04:40.421 CC app/spdk_tgt/spdk_tgt.o 00:04:40.421 CC examples/ioat/perf/perf.o 00:04:40.421 CC test/thread/poller_perf/poller_perf.o 00:04:40.421 LINK spdk_lspci 00:04:40.679 LINK interrupt_tgt 00:04:40.679 LINK poller_perf 00:04:40.679 LINK spdk_trace_record 00:04:40.679 LINK ioat_perf 00:04:40.679 LINK zipf 00:04:40.679 LINK nvmf_tgt 00:04:40.679 LINK iscsi_tgt 00:04:40.679 LINK spdk_trace 00:04:40.679 LINK spdk_tgt 00:04:41.008 CC app/spdk_nvme_perf/perf.o 00:04:41.008 CC examples/ioat/verify/verify.o 00:04:41.008 CC app/spdk_nvme_identify/identify.o 00:04:41.008 CC test/app/bdev_svc/bdev_svc.o 00:04:41.008 CC test/app/histogram_perf/histogram_perf.o 00:04:41.008 CC test/dma/test_dma/test_dma.o 00:04:41.008 CC test/app/jsoncat/jsoncat.o 00:04:41.008 CC app/spdk_nvme_discover/discovery_aer.o 00:04:41.008 CC test/app/stub/stub.o 00:04:41.266 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:41.266 LINK histogram_perf 00:04:41.266 LINK verify 00:04:41.266 LINK jsoncat 00:04:41.266 LINK bdev_svc 00:04:41.266 LINK spdk_nvme_discover 00:04:41.526 CC app/spdk_top/spdk_top.o 00:04:41.526 LINK stub 00:04:41.526 LINK test_dma 00:04:41.526 CC examples/thread/thread/thread_ex.o 00:04:41.526 LINK nvme_fuzz 00:04:41.526 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:41.785 CC examples/sock/hello_world/hello_sock.o 00:04:41.785 TEST_HEADER include/spdk/accel.h 00:04:41.785 TEST_HEADER include/spdk/accel_module.h 00:04:41.785 TEST_HEADER include/spdk/assert.h 00:04:41.785 TEST_HEADER include/spdk/barrier.h 00:04:41.785 TEST_HEADER include/spdk/base64.h 00:04:41.785 TEST_HEADER include/spdk/bdev.h 00:04:41.785 TEST_HEADER include/spdk/bdev_module.h 00:04:41.785 TEST_HEADER include/spdk/bdev_zone.h 00:04:41.785 TEST_HEADER include/spdk/bit_array.h 00:04:41.785 TEST_HEADER include/spdk/bit_pool.h 00:04:41.785 TEST_HEADER 
include/spdk/blob_bdev.h 00:04:41.785 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:41.785 TEST_HEADER include/spdk/blobfs.h 00:04:41.785 TEST_HEADER include/spdk/blob.h 00:04:41.785 TEST_HEADER include/spdk/conf.h 00:04:41.785 TEST_HEADER include/spdk/config.h 00:04:41.785 TEST_HEADER include/spdk/cpuset.h 00:04:41.785 TEST_HEADER include/spdk/crc16.h 00:04:41.785 TEST_HEADER include/spdk/crc32.h 00:04:41.785 TEST_HEADER include/spdk/crc64.h 00:04:41.785 TEST_HEADER include/spdk/dif.h 00:04:41.785 TEST_HEADER include/spdk/dma.h 00:04:41.785 TEST_HEADER include/spdk/endian.h 00:04:41.785 TEST_HEADER include/spdk/env_dpdk.h 00:04:41.785 TEST_HEADER include/spdk/env.h 00:04:41.785 TEST_HEADER include/spdk/event.h 00:04:41.785 TEST_HEADER include/spdk/fd_group.h 00:04:41.785 TEST_HEADER include/spdk/fd.h 00:04:41.785 TEST_HEADER include/spdk/file.h 00:04:41.785 TEST_HEADER include/spdk/ftl.h 00:04:41.785 TEST_HEADER include/spdk/gpt_spec.h 00:04:41.785 TEST_HEADER include/spdk/hexlify.h 00:04:41.785 TEST_HEADER include/spdk/histogram_data.h 00:04:41.785 TEST_HEADER include/spdk/idxd.h 00:04:41.785 TEST_HEADER include/spdk/idxd_spec.h 00:04:41.785 TEST_HEADER include/spdk/init.h 00:04:41.785 TEST_HEADER include/spdk/ioat.h 00:04:41.785 TEST_HEADER include/spdk/ioat_spec.h 00:04:41.785 TEST_HEADER include/spdk/iscsi_spec.h 00:04:41.785 TEST_HEADER include/spdk/json.h 00:04:41.785 TEST_HEADER include/spdk/jsonrpc.h 00:04:41.785 TEST_HEADER include/spdk/keyring.h 00:04:41.785 TEST_HEADER include/spdk/keyring_module.h 00:04:41.785 TEST_HEADER include/spdk/likely.h 00:04:41.785 TEST_HEADER include/spdk/log.h 00:04:41.785 TEST_HEADER include/spdk/lvol.h 00:04:41.785 TEST_HEADER include/spdk/memory.h 00:04:41.785 TEST_HEADER include/spdk/mmio.h 00:04:41.785 TEST_HEADER include/spdk/nbd.h 00:04:41.785 TEST_HEADER include/spdk/net.h 00:04:41.786 TEST_HEADER include/spdk/notify.h 00:04:41.786 TEST_HEADER include/spdk/nvme.h 00:04:41.786 TEST_HEADER include/spdk/nvme_intel.h 00:04:41.786 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:41.786 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:41.786 TEST_HEADER include/spdk/nvme_spec.h 00:04:41.786 TEST_HEADER include/spdk/nvme_zns.h 00:04:41.786 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:41.786 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:41.786 TEST_HEADER include/spdk/nvmf.h 00:04:41.786 TEST_HEADER include/spdk/nvmf_spec.h 00:04:41.786 TEST_HEADER include/spdk/nvmf_transport.h 00:04:41.786 TEST_HEADER include/spdk/opal.h 00:04:41.786 TEST_HEADER include/spdk/opal_spec.h 00:04:41.786 TEST_HEADER include/spdk/pci_ids.h 00:04:41.786 TEST_HEADER include/spdk/pipe.h 00:04:41.786 TEST_HEADER include/spdk/queue.h 00:04:41.786 TEST_HEADER include/spdk/reduce.h 00:04:41.786 TEST_HEADER include/spdk/rpc.h 00:04:41.786 TEST_HEADER include/spdk/scheduler.h 00:04:41.786 TEST_HEADER include/spdk/scsi.h 00:04:41.786 TEST_HEADER include/spdk/scsi_spec.h 00:04:41.786 TEST_HEADER include/spdk/sock.h 00:04:41.786 TEST_HEADER include/spdk/stdinc.h 00:04:41.786 TEST_HEADER include/spdk/string.h 00:04:41.786 TEST_HEADER include/spdk/thread.h 00:04:41.786 TEST_HEADER include/spdk/trace.h 00:04:41.786 TEST_HEADER include/spdk/trace_parser.h 00:04:41.786 TEST_HEADER include/spdk/tree.h 00:04:41.786 TEST_HEADER include/spdk/ublk.h 00:04:41.786 TEST_HEADER include/spdk/util.h 00:04:41.786 TEST_HEADER include/spdk/uuid.h 00:04:41.786 TEST_HEADER include/spdk/version.h 00:04:41.786 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:41.786 TEST_HEADER include/spdk/vfio_user_spec.h 
00:04:41.786 TEST_HEADER include/spdk/vhost.h 00:04:42.044 TEST_HEADER include/spdk/vmd.h 00:04:42.044 TEST_HEADER include/spdk/xor.h 00:04:42.044 TEST_HEADER include/spdk/zipf.h 00:04:42.044 CXX test/cpp_headers/accel.o 00:04:42.044 LINK thread 00:04:42.044 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:42.044 CC app/vhost/vhost.o 00:04:42.044 CC test/env/mem_callbacks/mem_callbacks.o 00:04:42.044 LINK hello_sock 00:04:42.044 LINK spdk_nvme_identify 00:04:42.044 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:42.303 CXX test/cpp_headers/accel_module.o 00:04:42.303 LINK vhost 00:04:42.561 LINK spdk_nvme_perf 00:04:42.561 CXX test/cpp_headers/assert.o 00:04:42.561 CC examples/vmd/lsvmd/lsvmd.o 00:04:42.561 CC test/env/vtophys/vtophys.o 00:04:42.561 CC test/event/event_perf/event_perf.o 00:04:42.561 LINK spdk_top 00:04:42.819 LINK mem_callbacks 00:04:42.819 LINK lsvmd 00:04:42.819 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:42.819 CXX test/cpp_headers/barrier.o 00:04:42.819 LINK vhost_fuzz 00:04:42.819 CC test/env/memory/memory_ut.o 00:04:42.819 LINK vtophys 00:04:42.819 LINK event_perf 00:04:43.079 LINK env_dpdk_post_init 00:04:43.079 CXX test/cpp_headers/base64.o 00:04:43.079 CC app/spdk_dd/spdk_dd.o 00:04:43.079 CC examples/vmd/led/led.o 00:04:43.079 CC test/env/pci/pci_ut.o 00:04:43.079 CC examples/idxd/perf/perf.o 00:04:43.079 CC test/event/reactor/reactor.o 00:04:43.338 CXX test/cpp_headers/bdev.o 00:04:43.338 LINK led 00:04:43.338 CC test/event/reactor_perf/reactor_perf.o 00:04:43.338 CC test/event/app_repeat/app_repeat.o 00:04:43.338 LINK reactor 00:04:43.339 CXX test/cpp_headers/bdev_module.o 00:04:43.339 LINK spdk_dd 00:04:43.598 LINK reactor_perf 00:04:43.598 LINK app_repeat 00:04:43.598 LINK idxd_perf 00:04:43.598 LINK pci_ut 00:04:43.598 CXX test/cpp_headers/bdev_zone.o 00:04:43.598 CC test/event/scheduler/scheduler.o 00:04:43.858 CC app/fio/nvme/fio_plugin.o 00:04:43.858 CC app/fio/bdev/fio_plugin.o 00:04:43.858 CXX test/cpp_headers/bit_array.o 00:04:43.858 CC test/nvme/aer/aer.o 00:04:43.858 LINK iscsi_fuzz 00:04:43.858 CC examples/accel/perf/accel_perf.o 00:04:44.117 LINK scheduler 00:04:44.117 CC examples/blob/cli/blobcli.o 00:04:44.117 CC examples/blob/hello_world/hello_blob.o 00:04:44.117 CXX test/cpp_headers/bit_pool.o 00:04:44.117 LINK memory_ut 00:04:44.377 LINK aer 00:04:44.377 CXX test/cpp_headers/blob_bdev.o 00:04:44.377 CXX test/cpp_headers/blobfs_bdev.o 00:04:44.377 LINK hello_blob 00:04:44.377 LINK spdk_bdev 00:04:44.377 CXX test/cpp_headers/blobfs.o 00:04:44.637 LINK spdk_nvme 00:04:44.637 LINK accel_perf 00:04:44.637 CC test/rpc_client/rpc_client_test.o 00:04:44.637 CC examples/nvme/hello_world/hello_world.o 00:04:44.637 CXX test/cpp_headers/blob.o 00:04:44.637 LINK blobcli 00:04:44.637 CC test/nvme/reset/reset.o 00:04:44.637 CC test/nvme/sgl/sgl.o 00:04:44.896 CC test/accel/dif/dif.o 00:04:44.896 LINK rpc_client_test 00:04:44.896 CC test/nvme/e2edp/nvme_dp.o 00:04:44.896 CXX test/cpp_headers/conf.o 00:04:44.896 CC examples/nvme/reconnect/reconnect.o 00:04:44.896 LINK hello_world 00:04:44.896 CC test/blobfs/mkfs/mkfs.o 00:04:44.896 LINK reset 00:04:44.896 CXX test/cpp_headers/config.o 00:04:45.155 LINK sgl 00:04:45.155 CXX test/cpp_headers/cpuset.o 00:04:45.155 CC examples/nvme/arbitration/arbitration.o 00:04:45.155 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:45.155 LINK nvme_dp 00:04:45.155 LINK mkfs 00:04:45.155 CC examples/nvme/hotplug/hotplug.o 00:04:45.155 CXX test/cpp_headers/crc16.o 00:04:45.155 LINK reconnect 00:04:45.414 LINK dif 
00:04:45.414 CC test/nvme/overhead/overhead.o 00:04:45.414 CXX test/cpp_headers/crc32.o 00:04:45.414 CC test/nvme/err_injection/err_injection.o 00:04:45.414 LINK arbitration 00:04:45.414 LINK hotplug 00:04:45.414 CC test/lvol/esnap/esnap.o 00:04:45.414 CXX test/cpp_headers/crc64.o 00:04:45.414 CC test/nvme/startup/startup.o 00:04:45.680 CC test/nvme/reserve/reserve.o 00:04:45.680 LINK err_injection 00:04:45.680 CC test/nvme/simple_copy/simple_copy.o 00:04:45.680 LINK overhead 00:04:45.680 CXX test/cpp_headers/dif.o 00:04:45.680 LINK startup 00:04:45.680 LINK nvme_manage 00:04:45.680 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:45.680 CC examples/nvme/abort/abort.o 00:04:45.939 CXX test/cpp_headers/dma.o 00:04:45.939 CXX test/cpp_headers/endian.o 00:04:45.939 CXX test/cpp_headers/env_dpdk.o 00:04:45.939 LINK simple_copy 00:04:45.939 LINK cmb_copy 00:04:45.939 LINK reserve 00:04:45.939 CXX test/cpp_headers/env.o 00:04:46.198 CXX test/cpp_headers/event.o 00:04:46.198 CC examples/bdev/hello_world/hello_bdev.o 00:04:46.198 CC examples/bdev/bdevperf/bdevperf.o 00:04:46.198 LINK abort 00:04:46.198 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:46.198 CC test/nvme/connect_stress/connect_stress.o 00:04:46.198 CXX test/cpp_headers/fd_group.o 00:04:46.198 CC test/nvme/boot_partition/boot_partition.o 00:04:46.457 CC test/nvme/compliance/nvme_compliance.o 00:04:46.457 CC test/bdev/bdevio/bdevio.o 00:04:46.457 LINK hello_bdev 00:04:46.457 LINK pmr_persistence 00:04:46.457 LINK connect_stress 00:04:46.457 CXX test/cpp_headers/fd.o 00:04:46.457 CC test/nvme/fused_ordering/fused_ordering.o 00:04:46.457 LINK boot_partition 00:04:46.716 CXX test/cpp_headers/file.o 00:04:46.716 CXX test/cpp_headers/ftl.o 00:04:46.716 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:46.716 LINK fused_ordering 00:04:46.975 CXX test/cpp_headers/gpt_spec.o 00:04:46.975 LINK bdevio 00:04:46.975 CC test/nvme/fdp/fdp.o 00:04:46.975 CXX test/cpp_headers/hexlify.o 00:04:46.975 CC test/nvme/cuse/cuse.o 00:04:46.975 CXX test/cpp_headers/histogram_data.o 00:04:46.975 CXX test/cpp_headers/idxd.o 00:04:46.975 LINK bdevperf 00:04:46.975 LINK nvme_compliance 00:04:47.234 CXX test/cpp_headers/idxd_spec.o 00:04:47.234 CXX test/cpp_headers/init.o 00:04:47.234 LINK doorbell_aers 00:04:47.234 CXX test/cpp_headers/ioat.o 00:04:47.234 CXX test/cpp_headers/ioat_spec.o 00:04:47.234 LINK fdp 00:04:47.234 CXX test/cpp_headers/iscsi_spec.o 00:04:47.234 CXX test/cpp_headers/json.o 00:04:47.234 CXX test/cpp_headers/jsonrpc.o 00:04:47.234 CXX test/cpp_headers/keyring.o 00:04:47.492 CXX test/cpp_headers/likely.o 00:04:47.492 CXX test/cpp_headers/keyring_module.o 00:04:47.492 CXX test/cpp_headers/lvol.o 00:04:47.492 CXX test/cpp_headers/log.o 00:04:47.493 CXX test/cpp_headers/memory.o 00:04:47.493 CXX test/cpp_headers/mmio.o 00:04:47.493 CXX test/cpp_headers/nbd.o 00:04:47.493 CC examples/nvmf/nvmf/nvmf.o 00:04:47.493 CXX test/cpp_headers/net.o 00:04:47.493 CXX test/cpp_headers/notify.o 00:04:47.493 CXX test/cpp_headers/nvme.o 00:04:47.753 CXX test/cpp_headers/nvme_intel.o 00:04:47.753 CXX test/cpp_headers/nvme_ocssd.o 00:04:47.753 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:47.753 CXX test/cpp_headers/nvme_spec.o 00:04:47.753 CXX test/cpp_headers/nvme_zns.o 00:04:47.753 CXX test/cpp_headers/nvmf_cmd.o 00:04:47.753 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:47.753 CXX test/cpp_headers/nvmf.o 00:04:47.753 CXX test/cpp_headers/nvmf_spec.o 00:04:48.012 LINK nvmf 00:04:48.012 CXX test/cpp_headers/nvmf_transport.o 00:04:48.012 CXX test/cpp_headers/opal.o 
00:04:48.012 CXX test/cpp_headers/opal_spec.o 00:04:48.012 CXX test/cpp_headers/pci_ids.o 00:04:48.013 CXX test/cpp_headers/pipe.o 00:04:48.013 CXX test/cpp_headers/queue.o 00:04:48.013 CXX test/cpp_headers/reduce.o 00:04:48.013 CXX test/cpp_headers/rpc.o 00:04:48.013 CXX test/cpp_headers/scheduler.o 00:04:48.013 CXX test/cpp_headers/scsi.o 00:04:48.013 CXX test/cpp_headers/scsi_spec.o 00:04:48.272 CXX test/cpp_headers/sock.o 00:04:48.272 CXX test/cpp_headers/stdinc.o 00:04:48.272 CXX test/cpp_headers/string.o 00:04:48.272 CXX test/cpp_headers/thread.o 00:04:48.272 CXX test/cpp_headers/trace.o 00:04:48.272 CXX test/cpp_headers/trace_parser.o 00:04:48.272 CXX test/cpp_headers/tree.o 00:04:48.272 CXX test/cpp_headers/ublk.o 00:04:48.272 CXX test/cpp_headers/util.o 00:04:48.272 CXX test/cpp_headers/uuid.o 00:04:48.272 CXX test/cpp_headers/version.o 00:04:48.272 CXX test/cpp_headers/vfio_user_pci.o 00:04:48.272 CXX test/cpp_headers/vfio_user_spec.o 00:04:48.595 CXX test/cpp_headers/vhost.o 00:04:48.595 CXX test/cpp_headers/vmd.o 00:04:48.595 CXX test/cpp_headers/xor.o 00:04:48.595 CXX test/cpp_headers/zipf.o 00:04:48.854 LINK cuse 00:04:52.142 LINK esnap 00:04:52.400 00:04:52.400 real 0m58.898s 00:04:52.400 user 4m58.062s 00:04:52.400 sys 1m20.483s 00:04:52.400 09:30:30 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:52.400 09:30:30 make -- common/autotest_common.sh@10 -- $ set +x 00:04:52.400 ************************************ 00:04:52.400 END TEST make 00:04:52.400 ************************************ 00:04:52.400 09:30:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:52.400 09:30:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:52.400 09:30:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:52.400 09:30:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.400 09:30:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:52.400 09:30:30 -- pm/common@44 -- $ pid=5923 00:04:52.400 09:30:30 -- pm/common@50 -- $ kill -TERM 5923 00:04:52.400 09:30:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.400 09:30:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:52.400 09:30:30 -- pm/common@44 -- $ pid=5925 00:04:52.400 09:30:30 -- pm/common@50 -- $ kill -TERM 5925 00:04:52.659 09:30:30 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:52.659 09:30:30 -- nvmf/common.sh@7 -- # uname -s 00:04:52.659 09:30:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:52.659 09:30:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:52.659 09:30:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:52.659 09:30:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:52.659 09:30:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:52.659 09:30:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:52.659 09:30:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:52.659 09:30:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:52.659 09:30:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:52.659 09:30:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:52.659 09:30:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a49f168b-eb54-4929-b45f-50a5185dc78e 00:04:52.659 09:30:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=a49f168b-eb54-4929-b45f-50a5185dc78e 00:04:52.659 09:30:30 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:52.659 09:30:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:52.659 09:30:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:52.659 09:30:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:52.659 09:30:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:52.659 09:30:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:52.659 09:30:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.659 09:30:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.659 09:30:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.660 09:30:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.660 09:30:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.660 09:30:30 -- paths/export.sh@5 -- # export PATH 00:04:52.660 09:30:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.660 09:30:30 -- nvmf/common.sh@47 -- # : 0 00:04:52.660 09:30:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:52.660 09:30:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:52.660 09:30:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:52.660 09:30:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:52.660 09:30:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:52.660 09:30:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:52.660 09:30:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:52.660 09:30:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:52.660 09:30:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:52.660 09:30:30 -- spdk/autotest.sh@32 -- # uname -s 00:04:52.660 09:30:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:52.660 09:30:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:52.660 09:30:30 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:52.660 09:30:30 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:52.660 09:30:30 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:52.660 09:30:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:52.660 09:30:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:52.660 09:30:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:52.660 09:30:30 -- spdk/autotest.sh@48 -- # udevadm_pid=66064 00:04:52.660 
09:30:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:52.660 09:30:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:52.660 09:30:30 -- pm/common@17 -- # local monitor 00:04:52.660 09:30:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.660 09:30:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.660 09:30:30 -- pm/common@21 -- # date +%s 00:04:52.660 09:30:30 -- pm/common@25 -- # sleep 1 00:04:52.660 09:30:30 -- pm/common@21 -- # date +%s 00:04:52.660 09:30:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721813430 00:04:52.660 09:30:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721813430 00:04:52.660 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721813430_collect-vmstat.pm.log 00:04:52.660 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721813430_collect-cpu-load.pm.log 00:04:53.612 09:30:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:53.612 09:30:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:53.612 09:30:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:53.612 09:30:31 -- common/autotest_common.sh@10 -- # set +x 00:04:53.612 09:30:31 -- spdk/autotest.sh@59 -- # create_test_list 00:04:53.612 09:30:31 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:53.612 09:30:31 -- common/autotest_common.sh@10 -- # set +x 00:04:53.871 09:30:31 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:53.871 09:30:31 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:53.871 09:30:31 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:53.871 09:30:31 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:53.871 09:30:31 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:53.871 09:30:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:53.871 09:30:31 -- common/autotest_common.sh@1455 -- # uname 00:04:53.871 09:30:31 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:53.871 09:30:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:53.871 09:30:31 -- common/autotest_common.sh@1475 -- # uname 00:04:53.871 09:30:31 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:53.871 09:30:31 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:53.871 09:30:31 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:53.871 09:30:31 -- spdk/autotest.sh@72 -- # hash lcov 00:04:53.871 09:30:31 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:53.871 09:30:31 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:53.871 --rc lcov_branch_coverage=1 00:04:53.871 --rc lcov_function_coverage=1 00:04:53.871 --rc genhtml_branch_coverage=1 00:04:53.871 --rc genhtml_function_coverage=1 00:04:53.871 --rc genhtml_legend=1 00:04:53.871 --rc geninfo_all_blocks=1 00:04:53.871 ' 00:04:53.871 09:30:31 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:53.871 --rc lcov_branch_coverage=1 00:04:53.871 --rc lcov_function_coverage=1 00:04:53.871 --rc genhtml_branch_coverage=1 00:04:53.871 --rc genhtml_function_coverage=1 00:04:53.871 --rc genhtml_legend=1 00:04:53.871 --rc geninfo_all_blocks=1 00:04:53.871 ' 00:04:53.871 
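[Editor's aside] The LCOV_OPTS string exported above feeds the coverage baseline capture that the script runs next (the lcov -c -i -t Baseline invocation traced just below). Pulled out of the xtrace for readability, that initial-capture step amounts to roughly the following sketch; the flags and paths simply mirror what this job logs, not a canonical recipe:

    # lcov 1.14: record a zero-hit baseline over the whole SPDK tree so that
    # source files never exercised by the tests still appear (at 0%) in the
    # final combined coverage report
    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
         --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
         --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external \
         -q -c -i -t Baseline \
         -d /home/vagrant/spdk_repo/spdk \
         -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info

The long run of geninfo "no functions found" warnings that follows is typically expected for this step: the test/cpp_headers objects are header-only compilation units whose .gcno files contain no function records, and geninfo merely notes that while building the baseline.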
09:30:31 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:53.871 --rc lcov_branch_coverage=1 00:04:53.871 --rc lcov_function_coverage=1 00:04:53.871 --rc genhtml_branch_coverage=1 00:04:53.871 --rc genhtml_function_coverage=1 00:04:53.871 --rc genhtml_legend=1 00:04:53.871 --rc geninfo_all_blocks=1 00:04:53.871 --no-external' 00:04:53.871 09:30:31 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:53.871 --rc lcov_branch_coverage=1 00:04:53.871 --rc lcov_function_coverage=1 00:04:53.871 --rc genhtml_branch_coverage=1 00:04:53.871 --rc genhtml_function_coverage=1 00:04:53.871 --rc genhtml_legend=1 00:04:53.871 --rc geninfo_all_blocks=1 00:04:53.871 --no-external' 00:04:53.871 09:30:31 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:53.871 lcov: LCOV version 1.14 00:04:53.871 09:30:31 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:08.752 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:08.752 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:20.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:20.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:20.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:20.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:20.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:20.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:20.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:20.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:20.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:20.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:20.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:20.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:20.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:20.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:20.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:20.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:20.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:20.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:20.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:20.975 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:20.975 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:20.975 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:20.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:20.975 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:20.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:20.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:20.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:20.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:20.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:20.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:20.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:20.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:20.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:20.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:20.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:20.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:20.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:20.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:20.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:20.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:20.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:20.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:20.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:20.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:20.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:23.508 09:31:01 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:23.508 09:31:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:23.508 09:31:01 -- common/autotest_common.sh@10 -- # set +x 00:05:23.508 09:31:01 -- spdk/autotest.sh@91 -- # rm -f 00:05:23.508 09:31:01 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:24.481 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.778 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:25.075 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:25.075 0000:00:12.0 (1b36 0010): Already using the nvme driver 
00:05:25.075 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:25.075 09:31:02 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:25.075 09:31:02 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:25.075 09:31:02 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:25.075 09:31:02 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:25.075 09:31:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:25.075 09:31:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:25.075 09:31:02 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:25.075 09:31:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:25.075 09:31:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:25.075 09:31:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:25.075 09:31:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:25.075 09:31:02 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:25.075 09:31:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:25.075 09:31:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:25.075 09:31:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:25.075 09:31:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:25.075 09:31:02 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:25.075 09:31:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:25.075 09:31:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:25.075 09:31:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:25.075 09:31:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:05:25.075 09:31:02 -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:05:25.075 09:31:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:25.075 09:31:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:25.075 09:31:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:25.075 09:31:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:05:25.075 09:31:02 -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:05:25.075 09:31:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:25.075 09:31:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:25.075 09:31:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:25.075 09:31:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:05:25.075 09:31:02 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:05:25.075 09:31:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:25.075 09:31:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:25.075 09:31:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:25.075 09:31:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:25.075 09:31:02 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:25.075 09:31:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:25.075 09:31:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:25.075 09:31:02 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:25.075 09:31:02 -- spdk/autotest.sh@110 -- # for dev 
in /dev/nvme*n!(*p*) 00:05:25.075 09:31:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:25.075 09:31:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:25.075 09:31:02 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:25.075 09:31:02 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:25.075 No valid GPT data, bailing 00:05:25.075 09:31:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:25.075 09:31:02 -- scripts/common.sh@391 -- # pt= 00:05:25.075 09:31:02 -- scripts/common.sh@392 -- # return 1 00:05:25.075 09:31:02 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:25.075 1+0 records in 00:05:25.075 1+0 records out 00:05:25.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188327 s, 55.7 MB/s 00:05:25.075 09:31:02 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:25.075 09:31:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:25.075 09:31:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:25.075 09:31:02 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:25.075 09:31:02 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:25.075 No valid GPT data, bailing 00:05:25.075 09:31:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:25.075 09:31:02 -- scripts/common.sh@391 -- # pt= 00:05:25.075 09:31:02 -- scripts/common.sh@392 -- # return 1 00:05:25.075 09:31:02 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:25.075 1+0 records in 00:05:25.075 1+0 records out 00:05:25.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00656354 s, 160 MB/s 00:05:25.075 09:31:02 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:25.075 09:31:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:25.075 09:31:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:05:25.075 09:31:02 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:05:25.075 09:31:02 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:25.334 No valid GPT data, bailing 00:05:25.334 09:31:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:25.334 09:31:02 -- scripts/common.sh@391 -- # pt= 00:05:25.334 09:31:02 -- scripts/common.sh@392 -- # return 1 00:05:25.334 09:31:02 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:25.334 1+0 records in 00:05:25.334 1+0 records out 00:05:25.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00254515 s, 412 MB/s 00:05:25.334 09:31:02 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:25.334 09:31:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:25.334 09:31:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:05:25.334 09:31:02 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:05:25.334 09:31:02 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:25.334 No valid GPT data, bailing 00:05:25.334 09:31:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:25.334 09:31:03 -- scripts/common.sh@391 -- # pt= 00:05:25.334 09:31:03 -- scripts/common.sh@392 -- # return 1 00:05:25.334 09:31:03 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:25.334 1+0 records in 00:05:25.334 1+0 records out 00:05:25.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621134 s, 169 MB/s 00:05:25.334 09:31:03 -- spdk/autotest.sh@110 -- 
# for dev in /dev/nvme*n!(*p*) 00:05:25.334 09:31:03 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:25.334 09:31:03 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:05:25.334 09:31:03 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:05:25.334 09:31:03 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:25.334 No valid GPT data, bailing 00:05:25.334 09:31:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:25.334 09:31:03 -- scripts/common.sh@391 -- # pt= 00:05:25.334 09:31:03 -- scripts/common.sh@392 -- # return 1 00:05:25.334 09:31:03 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:25.334 1+0 records in 00:05:25.334 1+0 records out 00:05:25.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00483125 s, 217 MB/s 00:05:25.334 09:31:03 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:25.334 09:31:03 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:25.334 09:31:03 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:05:25.334 09:31:03 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:05:25.334 09:31:03 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:25.592 No valid GPT data, bailing 00:05:25.592 09:31:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:25.592 09:31:03 -- scripts/common.sh@391 -- # pt= 00:05:25.592 09:31:03 -- scripts/common.sh@392 -- # return 1 00:05:25.592 09:31:03 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:25.592 1+0 records in 00:05:25.592 1+0 records out 00:05:25.592 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457941 s, 229 MB/s 00:05:25.592 09:31:03 -- spdk/autotest.sh@118 -- # sync 00:05:25.592 09:31:03 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:25.592 09:31:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:25.592 09:31:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:28.121 09:31:05 -- spdk/autotest.sh@124 -- # uname -s 00:05:28.121 09:31:05 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:28.121 09:31:05 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:28.121 09:31:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.121 09:31:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.121 09:31:05 -- common/autotest_common.sh@10 -- # set +x 00:05:28.121 ************************************ 00:05:28.121 START TEST setup.sh 00:05:28.121 ************************************ 00:05:28.121 09:31:05 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:28.121 * Looking for test storage... 
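Note: the loop traced above walks every whole-namespace NVMe device, treats an empty blkid PTTYPE (together with a failed spdk-gpt.py probe) as "not in use", and then zeroes the first MiB so stale partition metadata cannot leak into later tests. A minimal stand-alone sketch of that pattern, assuming root privileges and keying only off blkid (the helper name wipe_free_namespaces is made up for this illustration and is not part of the scripts traced here):

  #!/usr/bin/env bash
  shopt -s extglob                        # needed for the !(*p*) glob below
  wipe_free_namespaces() {
      local dev pt
      for dev in /dev/nvme*n!(*p*); do    # whole namespaces, no partition nodes
          pt=$(blkid -s PTTYPE -o value "$dev")
          [[ -n $pt ]] && continue        # keep devices that carry a partition table
          dd if=/dev/zero of="$dev" bs=1M count=1   # scrub the first MiB
      done
      sync
  }
  wipe_free_namespaces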
00:05:28.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:28.121 09:31:05 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:28.121 09:31:05 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:28.121 09:31:05 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:28.121 09:31:05 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.121 09:31:05 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.121 09:31:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:28.121 ************************************ 00:05:28.121 START TEST acl 00:05:28.121 ************************************ 00:05:28.121 09:31:05 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:28.380 * Looking for test storage... 00:05:28.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:28.380 09:31:05 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:05:28.380 09:31:05 
setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:28.380 09:31:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:28.380 09:31:05 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:28.380 09:31:05 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:28.380 09:31:05 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:28.380 09:31:05 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:28.380 09:31:05 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:28.380 09:31:05 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:28.380 09:31:05 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:29.754 09:31:07 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:29.754 09:31:07 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:29.754 09:31:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:29.754 09:31:07 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:29.754 09:31:07 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.754 09:31:07 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:30.320 09:31:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:30.320 09:31:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:30.320 09:31:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.888 Hugepages 00:05:30.888 node hugesize free / total 00:05:30.888 09:31:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:30.888 09:31:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:30.888 09:31:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.888 00:05:30.888 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:30.888 09:31:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:30.888 09:31:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:30.888 09:31:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:31.146 09:31:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:31.146 09:31:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:31.146 09:31:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:31.146 09:31:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:05:31.147 09:31:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:31.147 09:31:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:31.147 09:31:08 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:31.147 09:31:08 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:31.147 09:31:08 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:31.147 09:31:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:31.405 09:31:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:31.405 09:31:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:31.405 09:31:08 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:31.405 09:31:08 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:31.405 09:31:08 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:31.405 09:31:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:31.405 09:31:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:05:31.405 09:31:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:31.405 09:31:09 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:31.405 09:31:09 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:31.405 09:31:09 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:31.405 09:31:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:31.405 09:31:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:05:31.405 09:31:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:31.405 09:31:09 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:31.405 09:31:09 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:31.405 09:31:09 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:31.405 09:31:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:31.663 09:31:09 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:05:31.663 09:31:09 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:31.663 09:31:09 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.663 09:31:09 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.663 09:31:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:31.663 ************************************ 00:05:31.663 START TEST denied 00:05:31.663 ************************************ 00:05:31.663 09:31:09 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:05:31.663 09:31:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:31.663 09:31:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:31.663 09:31:09 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:31.663 09:31:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.663 09:31:09 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:33.040 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:33.040 09:31:10 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:33.040 09:31:10 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:33.040 09:31:10 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:33.040 09:31:10 setup.sh.acl.denied -- 
setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:33.040 09:31:10 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:33.041 09:31:10 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:33.041 09:31:10 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:33.041 09:31:10 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:33.041 09:31:10 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:33.041 09:31:10 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:39.606 00:05:39.606 real 0m7.808s 00:05:39.606 user 0m1.015s 00:05:39.606 sys 0m1.885s 00:05:39.606 09:31:17 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.606 ************************************ 00:05:39.606 END TEST denied 00:05:39.606 ************************************ 00:05:39.606 09:31:17 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:39.606 09:31:17 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:39.606 09:31:17 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.606 09:31:17 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.606 09:31:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:39.606 ************************************ 00:05:39.606 START TEST allowed 00:05:39.606 ************************************ 00:05:39.606 09:31:17 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:05:39.606 09:31:17 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:39.606 09:31:17 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:39.606 09:31:17 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:39.606 09:31:17 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:39.606 09:31:17 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:40.982 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.982 09:31:18 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:40.982 09:31:18 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:40.982 09:31:18 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:40.982 09:31:18 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:40.982 09:31:18 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:40.982 09:31:18 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:40.982 09:31:18 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:40.982 09:31:18 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:40.982 09:31:18 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:05:40.982 09:31:18 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:05:40.982 09:31:18 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:40.982 09:31:18 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:40.982 09:31:18 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:40.982 09:31:18 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e 
/sys/bus/pci/devices/0000:00:13.0 ]] 00:05:40.982 09:31:18 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:05:40.982 09:31:18 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:40.982 09:31:18 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:40.982 09:31:18 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:40.982 09:31:18 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:40.982 09:31:18 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:42.392 00:05:42.392 real 0m2.902s 00:05:42.392 user 0m1.179s 00:05:42.392 sys 0m1.751s 00:05:42.392 ************************************ 00:05:42.392 END TEST allowed 00:05:42.392 ************************************ 00:05:42.392 09:31:20 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.392 09:31:20 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:42.392 ************************************ 00:05:42.392 END TEST acl 00:05:42.392 ************************************ 00:05:42.392 00:05:42.392 real 0m14.200s 00:05:42.392 user 0m3.640s 00:05:42.392 sys 0m5.707s 00:05:42.392 09:31:20 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.392 09:31:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:42.392 09:31:20 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:42.392 09:31:20 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.392 09:31:20 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.392 09:31:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:42.392 ************************************ 00:05:42.392 START TEST hugepages 00:05:42.392 ************************************ 00:05:42.392 09:31:20 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:42.652 * Looking for test storage... 
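Note: the denied/allowed pair above exercises the PCI filtering of setup.sh: with PCI_BLOCKED the 0000:00:10.0 controller must be reported as "Skipping denied controller" and stay on the kernel nvme driver, while with PCI_ALLOWED only that controller is rebound (nvme -> uio_pci_generic) and the remaining controllers keep nvme. A condensed sketch of the same checks, assuming root privileges and an illustrative $SPDK_DIR pointing at the repository root:

  current_driver() {                      # which kernel driver is a PCI function bound to?
      basename "$(readlink -f "/sys/bus/pci/devices/$1/driver")"
  }

  # "denied": a blocked controller is skipped by "setup.sh config"
  PCI_BLOCKED=' 0000:00:10.0' "$SPDK_DIR/scripts/setup.sh" config \
      | grep 'Skipping denied controller at 0000:00:10.0'
  [[ $(current_driver 0000:00:10.0) == nvme ]]
  "$SPDK_DIR/scripts/setup.sh" reset

  # "allowed": only the allowed controller is rebound to a userspace driver
  PCI_ALLOWED='0000:00:10.0' "$SPDK_DIR/scripts/setup.sh" config \
      | grep -E '0000:00:10.0 .*: nvme -> .*'
  [[ $(current_driver 0000:00:11.0) == nvme ]]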
00:05:42.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:42.652 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:42.652 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:42.652 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:42.652 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:42.652 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:42.652 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:42.652 09:31:20 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:42.652 09:31:20 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:42.652 09:31:20 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:42.652 09:31:20 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:42.652 09:31:20 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.652 09:31:20 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.652 09:31:20 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.652 09:31:20 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.652 09:31:20 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.652 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.652 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 4375192 kB' 'MemAvailable: 7373048 kB' 'Buffers: 2436 kB' 'Cached: 3200612 kB' 'SwapCached: 0 kB' 'Active: 452276 kB' 'Inactive: 2860508 kB' 'Active(anon): 120252 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860508 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 111368 kB' 'Mapped: 48844 kB' 'Shmem: 10516 kB' 'KReclaimable: 84528 kB' 'Slab: 166900 kB' 'SReclaimable: 84528 kB' 'SUnreclaim: 82372 kB' 'KernelStack: 6492 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 334964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55172 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.653 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
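Note: the long run of "# continue" iterations above (and the matching run further below for AnonHugePages) is get_meminfo scanning /proc/meminfo field by field until it reaches the requested key; with xtrace enabled every skipped field produces one block of trace output. The underlying pattern is small, roughly as follows (key name and file path mirror the trace; the per-node meminfo handling and error checks of the real helper are omitted):

  get_meminfo() {
      local key=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$key" ]] && { echo "$val"; return 0; }   # value is in kB
      done < /proc/meminfo
      return 1
  }

  get_meminfo Hugepagesize    # prints 2048 on this machine

With a 2048 kB hugepage size, the 2097152 kB request passed to get_test_nr_hugepages just below works out to 2097152 / 2048 = 1024 pages, matching the nr_hugepages=1024 that the default_setup test goes on to use.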
00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:42.654 09:31:20 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:42.654 09:31:20 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.654 09:31:20 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.654 09:31:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:42.654 ************************************ 00:05:42.654 START TEST default_setup 00:05:42.654 ************************************ 00:05:42.654 09:31:20 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:05:42.654 09:31:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:42.654 09:31:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:42.654 09:31:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:42.654 09:31:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:42.654 09:31:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:05:42.654 09:31:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:42.654 09:31:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:42.654 09:31:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:42.654 09:31:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:42.655 09:31:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:42.655 09:31:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:42.655 09:31:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:42.655 09:31:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:42.655 09:31:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:42.655 09:31:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:42.655 09:31:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:42.655 09:31:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:42.655 09:31:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:42.655 09:31:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:42.655 09:31:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:42.655 09:31:20 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.655 09:31:20 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:43.236 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.175 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.175 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.175 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.175 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.175 09:31:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:44.175 09:31:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:44.175 09:31:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:44.175 09:31:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:44.175 09:31:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:44.175 09:31:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:44.175 09:31:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:44.175 09:31:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:44.175 09:31:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:44.175 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:44.175 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:44.175 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:44.175 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:44.175 
09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.175 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.175 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.175 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.175 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6509648 kB' 'MemAvailable: 9507212 kB' 'Buffers: 2436 kB' 'Cached: 3200600 kB' 'SwapCached: 0 kB' 'Active: 463944 kB' 'Inactive: 2860512 kB' 'Active(anon): 131920 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860512 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 122788 kB' 'Mapped: 48948 kB' 'Shmem: 10476 kB' 'KReclaimable: 83932 kB' 'Slab: 166240 kB' 'SReclaimable: 83932 kB' 'SUnreclaim: 82308 kB' 'KernelStack: 6576 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55284 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.176 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
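The trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one entry at a time: each line is split on ': ' into a key and a value, the key is compared against the requested field (AnonHugePages in this pass), and every non-matching key is skipped with continue until the match is reached. A minimal stand-alone sketch of that pattern, assuming a Linux /proc/meminfo; the function name is made up for illustration and this is not the SPDK helper itself:

get_meminfo_value() {
    # Split each /proc/meminfo line into key and value on ': ' and print the
    # value of the requested key; print 0 if the key never appears.
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] && { echo "${val:-0}"; return 0; }
    done < /proc/meminfo
    echo 0
}

get_meminfo_value AnonHugePages   # e.g. prints 0 (kB) when no anonymous THP pages are in use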
00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:44.177 09:31:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6509400 kB' 'MemAvailable: 9506964 kB' 'Buffers: 2436 kB' 'Cached: 3200600 kB' 'SwapCached: 0 kB' 'Active: 463380 kB' 'Inactive: 2860512 kB' 'Active(anon): 131356 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860512 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 122468 kB' 'Mapped: 48852 kB' 'Shmem: 10476 kB' 'KReclaimable: 83932 kB' 'Slab: 166236 kB' 'SReclaimable: 83932 kB' 'SUnreclaim: 82304 kB' 'KernelStack: 6560 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55252 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.441 09:31:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.441 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.441 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.441 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.441 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.441 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.441 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.441 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.441 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.441 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.442 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6509400 kB' 'MemAvailable: 9506964 kB' 'Buffers: 2436 kB' 'Cached: 3200600 kB' 'SwapCached: 0 kB' 'Active: 463592 kB' 'Inactive: 2860512 kB' 'Active(anon): 131568 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860512 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 122668 kB' 'Mapped: 48852 kB' 'Shmem: 10476 kB' 'KReclaimable: 83932 kB' 'Slab: 166232 kB' 'SReclaimable: 83932 kB' 'SUnreclaim: 82300 kB' 'KernelStack: 6560 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55252 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.443 09:31:22 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.443 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 
09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
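At this point the same key-by-key scan is being repeated for HugePages_Rsvd; the earlier passes already produced anon=0 and surp=0, and the trace below ends this pass with resv=0 before hugepages.sh checks that the 1024 configured pages are consistent with the surplus and reserved counts and then re-reads HugePages_Total. A rough, self-contained sketch of that kind of accounting check, with illustrative variable names; this is not the hugepages.sh code itself:

nr_hugepages=1024   # the count requested by default_setup

surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

# The pool is in the expected state when every configured page is accounted for:
# pages reported by the kernel == requested pages + surplus + reserved.
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: total=$total surp=$surp resv=$resv"
else
    echo "unexpected hugepage pool: total=$total vs $((nr_hugepages + surp + resv))" >&2
fi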
00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.444 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:44.445 nr_hugepages=1024 00:05:44.445 resv_hugepages=0 00:05:44.445 surplus_hugepages=0 00:05:44.445 anon_hugepages=0 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6509400 kB' 'MemAvailable: 9506964 kB' 'Buffers: 2436 kB' 'Cached: 3200600 kB' 'SwapCached: 0 kB' 'Active: 463436 kB' 'Inactive: 2860512 kB' 'Active(anon): 131412 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860512 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 122516 kB' 'Mapped: 48852 kB' 'Shmem: 10476 kB' 'KReclaimable: 83932 kB' 'Slab: 166232 kB' 'SReclaimable: 83932 kB' 'SUnreclaim: 82300 kB' 'KernelStack: 6560 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55252 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.445 09:31:22 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.445 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.446 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
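Note for readers skimming this wall of xtrace output: the loop being traced line by line above is the setup/common.sh get_meminfo helper walking /proc/meminfo (or a per-node meminfo file under /sys/devices/system/node) one "key: value" pair at a time, skipping every key until it reaches the requested one and echoing its value. The sketch below is a reconstruction of that logic as inferred from the trace, not the verbatim SPDK script; the example calls at the bottom are hypothetical and simply mirror the HugePages_Total and HugePages_Surp lookups performed above.

#!/usr/bin/env bash
# Illustrative reconstruction of the get_meminfo logic traced above (a sketch,
# not the verbatim setup/common.sh).
shopt -s extglob   # needed for the "Node <N> " prefix strip below

get_meminfo() {
    local get=$1        # meminfo key to look up, e.g. HugePages_Total
    local node=${2:-}   # optional NUMA node id; empty means system-wide
    local var val _
    local mem_f=/proc/meminfo
    local -a mem

    # Per-node lookups read that node's meminfo instead of the global file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <N> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # not the key we want, keep scanning
        echo "$val"
        return 0
    done
    return 1
}

# Hypothetical usage mirroring the checks in this test:
get_meminfo HugePages_Total      # system-wide total, e.g. 1024
get_meminfo HugePages_Surp 0     # surplus pages on node 0, e.g. 0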
00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6509400 kB' 'MemUsed: 5732572 kB' 'SwapCached: 0 kB' 'Active: 463668 kB' 'Inactive: 2860512 kB' 'Active(anon): 131644 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860512 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 3203036 kB' 'Mapped: 48852 kB' 'AnonPages: 122748 kB' 'Shmem: 10476 kB' 'KernelStack: 6576 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83932 kB' 'Slab: 166232 kB' 'SReclaimable: 83932 kB' 'SUnreclaim: 82300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.447 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:44.448 node0=1024 expecting 1024 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:44.448 00:05:44.448 real 0m1.789s 00:05:44.448 user 0m0.672s 00:05:44.448 sys 0m1.081s 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.448 ************************************ 00:05:44.448 END TEST default_setup 00:05:44.448 ************************************ 00:05:44.448 09:31:22 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:44.448 09:31:22 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:44.448 09:31:22 
setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.448 09:31:22 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.448 09:31:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:44.448 ************************************ 00:05:44.448 START TEST per_node_1G_alloc 00:05:44.448 ************************************ 00:05:44.448 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:05:44.448 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:44.448 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:44.448 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:44.448 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:44.448 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:44.448 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:44.448 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:44.448 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:44.448 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:44.448 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:44.448 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:44.448 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:44.448 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:44.448 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:44.448 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:44.448 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:44.448 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:44.449 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:44.449 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:44.449 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:44.449 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:44.449 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:44.449 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:44.449 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.449 09:31:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:45.029 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:45.305 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:45.305 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:45.305 0000:00:12.0 (1b36 0010): 
Already using the uio_pci_generic driver 00:05:45.305 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:45.305 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:45.305 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:45.305 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:45.305 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:45.305 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:45.305 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:45.305 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:45.305 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:45.305 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7550388 kB' 'MemAvailable: 10547972 kB' 'Buffers: 2436 kB' 'Cached: 3200600 kB' 'SwapCached: 0 kB' 'Active: 463900 kB' 'Inactive: 2860536 kB' 'Active(anon): 131876 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860536 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123008 kB' 'Mapped: 48976 kB' 'Shmem: 10476 kB' 'KReclaimable: 83928 kB' 'Slab: 166276 kB' 'SReclaimable: 83928 kB' 'SUnreclaim: 82348 kB' 'KernelStack: 6584 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55332 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.306 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
get_meminfo HugePages_Surp 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7550388 kB' 'MemAvailable: 10547972 kB' 'Buffers: 2436 kB' 'Cached: 3200600 kB' 'SwapCached: 0 kB' 'Active: 463704 kB' 'Inactive: 2860536 kB' 'Active(anon): 131680 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860536 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 122768 kB' 'Mapped: 48880 kB' 'Shmem: 10476 kB' 'KReclaimable: 83928 kB' 'Slab: 166316 kB' 'SReclaimable: 83928 kB' 'SUnreclaim: 82388 kB' 'KernelStack: 6560 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55300 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.307 
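The meminfo snapshot printed above already shows the allocation this per_node_1G_alloc test is exercising: 512 hugepages of 2048 kB each, i.e. 1 GiB of hugetlb memory, which is exactly what the 'Hugetlb: 1048576 kB' field reports. A quick arithmetic check of those three fields, as a standalone sketch (the variable names are mine; the numbers are the ones in the snapshot):

    # Values copied from the 'HugePages_Total: 512', 'Hugepagesize: 2048 kB'
    # and 'Hugetlb: 1048576 kB' fields of the snapshot above.
    total=512 page_kb=2048 hugetlb_kb=1048576
    (( total * page_kb == hugetlb_kb )) \
        && echo "consistent: ${total} x ${page_kb} kB = $(( hugetlb_kb / 1024 / 1024 )) GiB"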
09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.307 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.308 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- 
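What the long run of per-key '[[ ... ]] / continue' lines above records is common.sh's get_meminfo walking /proc/meminfo one 'key: value' pair at a time with IFS=': ' and 'read -r var val _', skipping every key that is not the one requested and echoing the value once it matches (0 for HugePages_Surp here). Below is a minimal, hedged sketch of that lookup, not the verbatim setup/common.sh source: the name get_meminfo_sketch is mine, and the direct while-read loop stands in for the mapfile/printf plumbing the real script uses.

    # Minimal sketch of the lookup the xtrace above performs: choose the
    # per-node meminfo file when a node number is given, otherwise fall back
    # to /proc/meminfo, then scan "key: value" pairs until the requested key
    # is found and print its numeric value.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#Node "$node" }          # per-node files prefix every line with "Node <n> "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # not the key we want -> next line
            echo "$val"                         # numeric value; the "kB" unit lands in $_
            return 0
        done < "$mem_f"
        return 1                                # key not present
    }

Called as, for example, surp=$(get_meminfo_sketch HugePages_Surp), it mirrors how hugepages.sh@99 captures the result into surp in the trace.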
setup/hugepages.sh@99 -- # surp=0 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7550388 kB' 'MemAvailable: 10547972 kB' 'Buffers: 2436 kB' 'Cached: 3200600 kB' 'SwapCached: 0 kB' 'Active: 463704 kB' 'Inactive: 2860536 kB' 'Active(anon): 131680 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860536 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 122768 kB' 'Mapped: 48880 kB' 'Shmem: 10476 kB' 'KReclaimable: 83928 kB' 'Slab: 166316 kB' 'SReclaimable: 83928 kB' 'SUnreclaim: 82388 kB' 'KernelStack: 6560 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55316 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 
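The 'mapfile -t mem' plus 'mem=("${mem[@]#Node +([0-9]) }")' pair that opens every get_meminfo call above is what lets the same parser handle both /proc/meminfo and the per-node files: the input is slurped into an array and an extglob pattern strips the leading 'Node <n> ' prefix from each element. A self-contained sketch of that idiom, using a made-up temporary file as input (an assumption; the real call reads the per-node meminfo path):

    #!/usr/bin/env bash
    shopt -s extglob                      # needed for the +([0-9]) pattern below
    # Hypothetical stand-in for /sys/devices/system/node/node0/meminfo.
    tmp=$(mktemp)
    printf '%s\n' 'Node 0 MemTotal:       12241972 kB' \
                  'Node 0 HugePages_Total:   512' > "$tmp"
    mapfile -t mem < "$tmp"               # one line per array element, newlines stripped
    mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node <n> " prefix from every line
    printf '%s\n' "${mem[@]}"             # now shaped like ordinary /proc/meminfo lines
    rm -f "$tmp"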
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.309 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 
09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.310 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.311 09:31:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.311 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:45.572 nr_hugepages=512 00:05:45.572 resv_hugepages=0 00:05:45.572 surplus_hugepages=0 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:45.572 anon_hugepages=0 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7550388 kB' 'MemAvailable: 10547972 kB' 'Buffers: 2436 kB' 'Cached: 3200600 kB' 'SwapCached: 0 kB' 'Active: 463484 kB' 'Inactive: 2860536 kB' 'Active(anon): 131460 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860536 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 122556 kB' 'Mapped: 48880 kB' 'Shmem: 10476 kB' 'KReclaimable: 83928 kB' 'Slab: 166312 kB' 'SReclaimable: 83928 kB' 'SUnreclaim: 82384 kB' 'KernelStack: 6560 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55316 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.572 09:31:23 
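Once anon, surp and resv have been collected (all 0 here) and nr_hugepages=512 has been echoed, hugepages.sh@107 and @109 assert that the configured count is fully accounted for, and @110 reads HugePages_Total back for the per-node check. A compact sketch of that bookkeeping, reusing the hypothetical get_meminfo_sketch helper from the earlier snippet; the expected count of 512 is hard-coded from the trace, and reading nr_hugepages from HugePages_Total is an assumption about how the real script obtains it.

    expected=512                                   # value visible in the trace, not derived here
    anon=$(get_meminfo_sketch AnonHugePages)       # 0 in this run
    surp=$(get_meminfo_sketch HugePages_Surp)      # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0
    nr_hugepages=$(get_meminfo_sketch HugePages_Total)
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    # Same shape as the checks at setup/hugepages.sh@107 and @109; the sketch
    # just bails on mismatch, the real test handles failure itself.
    (( expected == nr_hugepages + surp + resv )) || exit 1
    (( expected == nr_hugepages )) || exit 1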
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.572 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.573 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:45.574 09:31:23 
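(Editorial aside: the get_meminfo walk just traced scans every /proc/meminfo key until it reaches HugePages_Total and echoes 512. A minimal standalone sketch of that lookup, not the setup/common.sh helper itself:)
get_hugepages_total() {
    # Scan /proc/meminfo the same way the trace above does: split each line
    # on ': ', match the key, and print the value that follows it.
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == HugePages_Total ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
}
get_hugepages_total    # the real run above resolved this to 512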
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7550388 kB' 'MemUsed: 4691584 kB' 'SwapCached: 0 kB' 'Active: 463472 kB' 'Inactive: 2860536 kB' 'Active(anon): 131448 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860536 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'FilePages: 3203036 kB' 'Mapped: 48880 kB' 'AnonPages: 122544 kB' 'Shmem: 10476 kB' 'KernelStack: 6560 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83928 kB' 'Slab: 166312 kB' 'SReclaimable: 83928 kB' 'SUnreclaim: 82384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.574 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.575 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.576 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.576 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.576 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.576 
09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.576 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:45.576 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.576 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.576 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.576 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:45.576 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:45.576 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:45.576 node0=512 expecting 512 00:05:45.576 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:45.576 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:45.576 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:45.576 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:45.576 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:45.576 00:05:45.576 real 0m1.001s 00:05:45.576 user 0m0.430s 00:05:45.576 sys 0m0.596s 00:05:45.576 ************************************ 00:05:45.576 END TEST per_node_1G_alloc 00:05:45.576 ************************************ 00:05:45.576 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.576 09:31:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:45.576 09:31:23 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:45.576 09:31:23 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.576 09:31:23 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.576 09:31:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:45.576 ************************************ 00:05:45.576 START TEST even_2G_alloc 00:05:45.576 ************************************ 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:45.576 
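(Editorial aside: the even_2G_alloc setup starting here passes 2097152 to get_test_nr_hugepages and arrives at nr_hugepages=1024; with the "Hugepagesize: 2048 kB" reported in the meminfo dumps above, that is simply the requested size divided by the hugepage size. A minimal standalone sketch of the same arithmetic, assuming the argument is in kB as the resulting counts suggest:)
size_kb=2097152          # 2 GiB worth of hugepages requested
hugepagesize_kb=2048     # "Hugepagesize: 2048 kB" from the meminfo dumps above
echo $(( size_kb / hugepagesize_kb ))   # -> 1024 hugepages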
09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:45.576 09:31:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:46.142 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.404 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.404 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.404 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.404 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.404 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:46.404 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:46.404 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:46.404 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:46.404 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:46.404 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:46.404 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:46.404 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:46.404 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:46.404 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:46.404 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:46.404 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:46.404 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.404 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.404 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.404 09:31:24 
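(Editorial aside: the verify step beginning here reuses the same meminfo walk, but the file it reads depends on whether a node number is given: with node unset the /sys/devices/system/node/node/meminfo existence test above fails and it falls back to /proc/meminfo, while the earlier per-node check read /sys/devices/system/node/node0/meminfo. A minimal standalone sketch of that source selection, with a hypothetical helper name rather than the repo's code:)
meminfo_source() {
    # Pick the per-node meminfo file when a NUMA node number is supplied,
    # otherwise fall back to the system-wide /proc/meminfo.
    local node=$1 mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "$mem_f"
}
grep HugePages_Total "$(meminfo_source)"     # system-wide counter
grep HugePages_Total "$(meminfo_source 0)"   # node 0 only ("Node 0 HugePages_Total: ...")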
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.404 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.404 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.404 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.404 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6506132 kB' 'MemAvailable: 9503716 kB' 'Buffers: 2436 kB' 'Cached: 3200600 kB' 'SwapCached: 0 kB' 'Active: 463828 kB' 'Inactive: 2860536 kB' 'Active(anon): 131804 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860536 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 122640 kB' 'Mapped: 48944 kB' 'Shmem: 10476 kB' 'KReclaimable: 83928 kB' 'Slab: 166300 kB' 'SReclaimable: 83928 kB' 'SUnreclaim: 82372 kB' 'KernelStack: 6592 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55300 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.405 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6506520 kB' 'MemAvailable: 9504104 kB' 'Buffers: 2436 kB' 'Cached: 3200600 kB' 'SwapCached: 0 kB' 'Active: 463500 kB' 'Inactive: 2860536 kB' 'Active(anon): 131476 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860536 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 122576 kB' 'Mapped: 48852 kB' 'Shmem: 10476 kB' 'KReclaimable: 83928 kB' 'Slab: 166292 kB' 'SReclaimable: 83928 kB' 'SUnreclaim: 82364 kB' 'KernelStack: 6560 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55300 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:46.406 09:31:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.406 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.407 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
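The scan above is the body of get_meminfo in setup/common.sh as it appears in the trace: /proc/meminfo is read into an array with mapfile, any "Node N " prefix is stripped, and each line is split with IFS=': ' until the requested key is found, at which point its value is echoed and the function returns 0. A minimal sketch of that lookup, reconstructed only from the commands visible in this xtrace (argument handling and the per-node branch are assumptions, and the helper name is invented):

    # Reconstructed from the xtrace above, not copied from setup/common.sh.
    shopt -s extglob                        # needed for the +([0-9]) prefix strip below
    get_meminfo_sketch() {
        local get=$1 node=${2:-}            # key to look up, optional NUMA node
        local var val _ line
        local mem_f=/proc/meminfo mem
        # per-node lookups read the node-local file, whose lines begin with "Node N "
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # drop any "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"                     # e.g. 0 for HugePages_Surp in this run
            return 0
        done
        return 1
    }

Called as get_meminfo_sketch HugePages_Surp against the dump printed above, this prints 0, which is the value the trace ends up echoing.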
00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:46.408 
09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.408 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6506520 kB' 'MemAvailable: 9504104 kB' 'Buffers: 2436 kB' 'Cached: 3200600 kB' 'SwapCached: 0 kB' 'Active: 463504 kB' 'Inactive: 2860536 kB' 'Active(anon): 131480 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860536 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 122576 kB' 'Mapped: 48852 kB' 'Shmem: 10476 kB' 'KReclaimable: 83928 kB' 'Slab: 166292 kB' 'SReclaimable: 83928 kB' 'SUnreclaim: 82364 kB' 'KernelStack: 6560 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55300 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
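For scale, the dumps printed above report HugePages_Total: 1024 and Hugepagesize: 2048 kB, and 1024 * 2048 kB = 2097152 kB, which is exactly the Hugetlb: 2097152 kB line: the even_2G_alloc case is working against a 2 GiB pool of 2 MB pages.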
00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.409 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.410 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:46.411 nr_hugepages=1024 00:05:46.411 resv_hugepages=0 00:05:46.411 surplus_hugepages=0 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:46.411 anon_hugepages=0 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6506520 kB' 'MemAvailable: 9504104 kB' 'Buffers: 2436 kB' 'Cached: 3200600 kB' 'SwapCached: 0 kB' 'Active: 463500 kB' 'Inactive: 2860536 kB' 'Active(anon): 131476 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860536 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 122572 kB' 'Mapped: 48852 kB' 'Shmem: 10476 kB' 'KReclaimable: 83928 kB' 'Slab: 166284 kB' 'SReclaimable: 83928 kB' 'SUnreclaim: 82356 kB' 'KernelStack: 6560 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55300 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.411 09:31:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.411 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.673 09:31:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.673 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
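The trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo field by field and skipping every key that is not the one requested (HugePages_Total here). A minimal stand-alone sketch of that pattern, not the repository script itself (the function name and the default file argument are illustrative):

#!/usr/bin/env bash
# Sketch of the scan seen in the trace: read "key: value" pairs and print the
# value for the requested key, 'continue'-ing past everything else.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # Buffers, Cached, ... are skipped here
        echo "$val"                        # numeric value only, unit column dropped
        return 0
    done < "$mem_f"
    return 1
}

get_meminfo_sketch HugePages_Total   # would print 1024 on the VM in this log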
00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.674 09:31:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.674 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.675 
09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.675 09:31:24 
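When get_meminfo is given a node number (node=0 above), the same scan is pointed at the per-NUMA-node meminfo file, whose lines carry a "Node 0 " prefix that is stripped before parsing; get_nodes likewise just globs /sys/devices/system/node/node<N> to count nodes. A hedged sketch of those two steps as the trace shows them (helper names are illustrative):

#!/usr/bin/env bash
shopt -s extglob   # the +([0-9]) patterns below need extended globbing

node_meminfo_sketch() {
    local node=$1 mem_f=/proc/meminfo mem
    # Prefer the per-node file when it exists, as the trace does for node0.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node lines look like "Node 0 HugePages_Total: 1024"; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
}

# Count NUMA nodes the same way the trace does (node0, node1, ...).
nodes=(/sys/devices/system/node/node+([0-9]))
echo "no_nodes=${#nodes[@]}"   # 1 on this single-node test VM

node_meminfo_sketch 0 | grep HugePages_Surp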
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6506520 kB' 'MemUsed: 5735452 kB' 'SwapCached: 0 kB' 'Active: 463612 kB' 'Inactive: 2860536 kB' 'Active(anon): 131588 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860536 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 3203036 kB' 'Mapped: 48852 kB' 'AnonPages: 122696 kB' 'Shmem: 10476 kB' 'KernelStack: 6576 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83928 kB' 'Slab: 166284 kB' 'SReclaimable: 83928 kB' 'SUnreclaim: 82356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.675 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.676 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.677 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.677 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.677 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.677 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.677 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.677 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.677 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:46.677 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.677 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.677 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.677 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:46.677 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:46.677 node0=1024 expecting 1024 00:05:46.677 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:46.677 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:46.677 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:46.677 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:46.677 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:46.677 09:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:46.677 00:05:46.677 real 0m0.996s 00:05:46.677 user 0m0.416s 00:05:46.677 sys 0m0.621s 00:05:46.677 ************************************ 00:05:46.677 END TEST even_2G_alloc 00:05:46.677 ************************************ 00:05:46.677 09:31:24 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.677 09:31:24 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:46.677 09:31:24 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:46.677 09:31:24 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.677 09:31:24 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.677 09:31:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 
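even_2G_alloc ends by checking the arithmetic the trace just walked through: the global HugePages_Total must equal nr_hugepages + surplus + reserved (1024 == 1024 + 0 + 0), and node0's per-node count, with its surplus and reserved pages folded back in, must still be the 1024 pages requested (1024 x 2048 kB = 2 GiB, hence the test name). A compact sketch of that final check with the numbers taken from the log:

#!/usr/bin/env bash
# Values as reported in the trace above.
nr_hugepages=1024                 # pages requested for the 2G allocation
surp=0 resv=0                     # HugePages_Surp / HugePages_Rsvd
nodes_test=([0]=1024)             # per-node HugePages_Total (node0 only here)

(( 1024 == nr_hugepages + surp + resv )) || { echo "global hugepage count mismatch"; exit 1; }

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += surp + resv ))   # node reserved + surplus folded back in (both 0 here)
    echo "node${node}=${nodes_test[node]} expecting 1024"
    [[ ${nodes_test[node]} == 1024 ]] || exit 1
done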
00:05:46.677 ************************************ 00:05:46.677 START TEST odd_alloc 00:05:46.677 ************************************ 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:46.677 09:31:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:47.245 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.508 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:47.508 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:47.508 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:47.508 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # 
local surp 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6494608 kB' 'MemAvailable: 9492196 kB' 'Buffers: 2436 kB' 'Cached: 3200608 kB' 'SwapCached: 0 kB' 'Active: 460040 kB' 'Inactive: 2860544 kB' 'Active(anon): 128016 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 118856 kB' 'Mapped: 48232 kB' 'Shmem: 10476 kB' 'KReclaimable: 83920 kB' 'Slab: 166268 kB' 'SReclaimable: 83920 kB' 'SUnreclaim: 82348 kB' 'KernelStack: 6496 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 336596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55220 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
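The odd_alloc test that starts here asks get_test_nr_hugepages for 2098176 kB, i.e. HUGEMEM=2049 (MB) times 1024, which is 2 GiB plus one extra 2048 kB page, so nr_hugepages lands on the odd value 1025 (with HUGE_EVEN_ALLOC=yes). verify_nr_hugepages then only reads AnonHugePages after confirming transparent hugepages are not pinned to "never". A sketch of both computations; the round-up formula is an assumption about how the requested size maps to 1025 pages, not a copy of hugepages.sh:

#!/usr/bin/env bash
default_hugepages=2048           # kB per page (Hugepagesize in the log)
size_kb=$(( 2049 * 1024 ))       # = 2098176 kB, as requested by odd_alloc above

# Assumed ceiling division: 2098176 kB / 2048 kB = 1024.5, rounded up to 1025.
nr_hugepages=$(( (size_kb + default_hugepages - 1) / default_hugepages ))
echo "nr_hugepages=$nr_hugepages"   # 1025, matching the log

# THP guard from the trace: proceed only when "[never]" is not the active policy.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null || echo 'always [madvise] never')
if [[ $thp != *"[never]"* ]]; then
    echo "THP policy: $thp -> AnonHugePages will be read from meminfo"
fi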
00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.508 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 
09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.509 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6494608 kB' 'MemAvailable: 9492196 kB' 'Buffers: 2436 kB' 'Cached: 3200608 kB' 'SwapCached: 0 kB' 'Active: 459972 kB' 'Inactive: 2860544 kB' 'Active(anon): 127948 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 118788 kB' 'Mapped: 48232 kB' 'Shmem: 10476 kB' 'KReclaimable: 83920 kB' 'Slab: 166232 kB' 'SReclaimable: 83920 kB' 'SUnreclaim: 82312 kB' 'KernelStack: 6464 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 336596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.510 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 
09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local 
var val 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:47.511 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6494608 kB' 'MemAvailable: 9492196 kB' 'Buffers: 2436 kB' 'Cached: 3200608 kB' 'SwapCached: 0 kB' 'Active: 459836 kB' 'Inactive: 2860544 kB' 'Active(anon): 127812 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 118908 kB' 'Mapped: 48108 kB' 'Shmem: 10476 kB' 'KReclaimable: 83920 kB' 'Slab: 166228 kB' 'SReclaimable: 83920 kB' 'SUnreclaim: 82308 kB' 'KernelStack: 6480 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 336596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
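The long runs of "-- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "-- # continue" records above are setup/common.sh's get_meminfo scanning every field of the meminfo dump it just printed until it reaches the requested key, then echoing that key's value (the "echo 0" / "return 0" pairs). A minimal sketch of that helper, with names taken from the trace and the body simplified (the real script mapfiles the whole file and strips any "Node N " prefix before scanning):

  get_meminfo() {                    # e.g. get_meminfo HugePages_Rsvd   or   get_meminfo HugePages_Surp 0
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo    # per-node pass, used later in this trace
      fi
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }  # the quoted, literal comparison is what xtrace
      done < "$mem_f"                                         # renders as \H\u\g\e\P\a\g\e\s\_...
  }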
00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.512 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.513 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:47.514 nr_hugepages=1025 00:05:47.514 resv_hugepages=0 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:47.514 surplus_hugepages=0 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:47.514 anon_hugepages=0 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6494608 kB' 'MemAvailable: 9492196 kB' 'Buffers: 2436 kB' 'Cached: 3200608 kB' 'SwapCached: 0 kB' 'Active: 459840 kB' 'Inactive: 2860544 kB' 'Active(anon): 127816 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 118920 
kB' 'Mapped: 48108 kB' 'Shmem: 10476 kB' 'KReclaimable: 83920 kB' 'Slab: 166228 kB' 'SReclaimable: 83920 kB' 'SUnreclaim: 82308 kB' 'KernelStack: 6480 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 336596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 
09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.514 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
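By this point hugepages.sh has collected anon=0 (AnonHugePages), surp=0 (HugePages_Surp) and resv=0 (HugePages_Rsvd), echoed nr_hugepages=1025 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and is now reading HugePages_Total to confirm the odd allocation adds up. Roughly, and assuming only what the trace shows (the surrounding test harness is omitted, and get_meminfo is the sketch given earlier):

  nr_hugepages=1025                                   # the odd page count this test configured
  anon=$(get_meminfo AnonHugePages)                   # 0
  surp=$(get_meminfo HugePages_Surp)                  # 0
  resv=$(get_meminfo HugePages_Rsvd)                  # 0
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))   # 1025 == 1025 + 0 + 0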
00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:47.515 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6494608 kB' 'MemUsed: 5747364 kB' 'SwapCached: 0 kB' 'Active: 459836 kB' 'Inactive: 2860544 kB' 'Active(anon): 127812 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 3203044 kB' 'Mapped: 48108 kB' 'AnonPages: 118908 kB' 'Shmem: 10476 kB' 'KernelStack: 6464 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83920 kB' 'Slab: 166228 kB' 'SReclaimable: 83920 kB' 'SUnreclaim: 82308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
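The trace above shows the odd_alloc check reading /sys/devices/system/node/node0/meminfo key by key with IFS=': ' until it hits the field it wants (HugePages_Total earlier, HugePages_Surp here) and echoes its value. A minimal sketch of that parsing loop, using a hypothetical helper name of our own rather than the verbatim setup/common.sh code:

    #!/usr/bin/env bash
    # get_meminfo_sketch KEY [NODE] - print one meminfo value, optionally per NUMA node.
    # Illustrative sketch only; the real helper lives in setup/common.sh.
    get_meminfo_sketch() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Per-node counters come from sysfs when a node index is supplied.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # sysfs lines carry a "Node N " prefix; strip it so the key is field one,
        # then split on ": " the same way the xtrace above does.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Total 0   # -> 1025 on this run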
00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.516 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.776 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.777 09:31:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:47.777 node0=1025 expecting 1025 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:47.777 00:05:47.777 real 0m0.996s 00:05:47.777 user 0m0.429s 00:05:47.777 sys 0m0.608s 00:05:47.777 ************************************ 00:05:47.777 END TEST odd_alloc 00:05:47.777 ************************************ 00:05:47.777 09:31:25 
setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.777 09:31:25 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:47.777 09:31:25 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:47.777 09:31:25 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.777 09:31:25 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.777 09:31:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:47.777 ************************************ 00:05:47.777 START TEST custom_alloc 00:05:47.777 ************************************ 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node 
in "${!nodes_hp[@]}" 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:47.777 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:47.778 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:47.778 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:47.778 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:47.778 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:47.778 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:47.778 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:47.778 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:47.778 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:47.778 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:47.778 09:31:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:47.778 09:31:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:47.778 09:31:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:48.345 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:48.345 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:48.345 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:48.345 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:48.345 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:48.610 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:48.610 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:48.610 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:48.610 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:48.610 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:48.610 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:48.610 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:48.610 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:48.610 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:48.610 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:48.610 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:48.610 
09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:48.610 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:48.610 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:48.610 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.610 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:48.610 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:48.610 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.610 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.610 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7549768 kB' 'MemAvailable: 10547352 kB' 'Buffers: 2436 kB' 'Cached: 3200604 kB' 'SwapCached: 0 kB' 'Active: 459996 kB' 'Inactive: 2860540 kB' 'Active(anon): 127972 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118812 kB' 'Mapped: 48348 kB' 'Shmem: 10476 kB' 'KReclaimable: 83920 kB' 'Slab: 166292 kB' 'SReclaimable: 83920 kB' 'SUnreclaim: 82372 kB' 'KernelStack: 6472 kB' 'PageTables: 3584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 336596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.611 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7549516 kB' 'MemAvailable: 10547100 kB' 'Buffers: 2436 kB' 'Cached: 3200604 kB' 'SwapCached: 0 kB' 'Active: 459720 kB' 'Inactive: 2860540 kB' 'Active(anon): 127696 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118800 kB' 'Mapped: 48108 kB' 'Shmem: 10476 kB' 'KReclaimable: 83920 kB' 'Slab: 166288 kB' 'SReclaimable: 83920 kB' 'SUnreclaim: 82368 kB' 'KernelStack: 6464 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 336596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.612 09:31:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.612 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [... identical "IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" trace repeated for every remaining /proc/meminfo field ...] 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.614 09:31:26
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7550228 kB' 'MemAvailable: 10547812 kB' 'Buffers: 2436 kB' 'Cached: 3200604 kB' 'SwapCached: 0 kB' 'Active: 459772 kB' 'Inactive: 2860540 kB' 'Active(anon): 127748 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118840 kB' 'Mapped: 48108 kB' 'Shmem: 10476 kB' 'KReclaimable: 83920 kB' 'Slab: 166288 kB' 'SReclaimable: 83920 kB' 'SUnreclaim: 82368 kB' 'KernelStack: 6464 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 336596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.614 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.614 09:31:26 
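The block above is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time under the shell trace: IFS=': ' splits each line into a name and a value, every non-matching name falls through to continue, and the value of the requested field is echoed at the end (0 for HugePages_Surp, which hugepages.sh then stores as surp). A minimal sketch of that parsing pattern, with an illustrative helper name and none of the per-node handling of the real script:

    get_meminfo_field() {
        # Illustrative sketch of the scan traced above -- not the exact
        # setup/common.sh source, which reads the file into an array first.
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other meminfo field
            echo "$val"                        # value in kB, or a page count
            return 0
        done < /proc/meminfo
        return 1
    }
    # e.g. get_meminfo_field HugePages_Surp   -> prints 0 on this build host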
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.614 09:31:26 [... identical "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue / IFS=': ' / read -r var val _" trace repeated for every non-matching /proc/meminfo field ...] 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:48.616 nr_hugepages=512 00:05:48.616 resv_hugepages=0 00:05:48.616 surplus_hugepages=0 00:05:48.616 anon_hugepages=0 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var 
val 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7550228 kB' 'MemAvailable: 10547812 kB' 'Buffers: 2436 kB' 'Cached: 3200604 kB' 'SwapCached: 0 kB' 'Active: 459728 kB' 'Inactive: 2860540 kB' 'Active(anon): 127704 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118804 kB' 'Mapped: 48108 kB' 'Shmem: 10476 kB' 'KReclaimable: 83920 kB' 'Slab: 166288 kB' 'SReclaimable: 83920 kB' 'SUnreclaim: 82368 kB' 'KernelStack: 6464 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 336596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.616 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.616 09:31:26 
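With surp and resv read back, hugepages.sh@102-@105 echoes the pool state (nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and @110 re-reads HugePages_Total to confirm the kernel really provisioned the requested custom pool. A hedged sketch of that accounting check, reusing the illustrative helper from the previous snippet:

    # Sketch of the consistency check at setup/hugepages.sh@107/@110 in the
    # trace; variable names follow the trace, get_meminfo_field is the
    # illustrative helper above, not the real script.
    nr_hugepages=512
    surp=$(get_meminfo_field HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo_field HugePages_Rsvd)      # 0 in this run
    total=$(get_meminfo_field HugePages_Total)    # 512 in this run
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    # The custom allocation is accepted only if the kernel-reported total
    # equals the requested pages plus any surplus and reserved pages.
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2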
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [... identical "IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue" trace repeated for every non-matching /proc/meminfo field ...] setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7550880 kB' 'MemUsed: 4691092 kB' 'SwapCached: 0 kB' 'Active: 459740 kB' 'Inactive: 2860540 kB' 'Active(anon): 127716 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 
kB' 'Writeback: 0 kB' 'FilePages: 3203040 kB' 'Mapped: 48108 kB' 'AnonPages: 118804 kB' 'Shmem: 10476 kB' 'KernelStack: 6464 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83920 kB' 'Slab: 166284 kB' 'SReclaimable: 83920 kB' 'SUnreclaim: 82364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.618 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.619 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.620 09:31:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:48.620 node0=512 expecting 512 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:48.620 09:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:48.879 ************************************ 00:05:48.879 END TEST custom_alloc 00:05:48.879 ************************************ 00:05:48.879 00:05:48.879 real 0m1.019s 00:05:48.879 user 0m0.470s 00:05:48.879 sys 0m0.573s 00:05:48.879 09:31:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.879 09:31:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:48.879 09:31:26 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:48.879 09:31:26 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.879 09:31:26 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.879 09:31:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:48.879 ************************************ 00:05:48.879 START TEST no_shrink_alloc 00:05:48.879 ************************************ 00:05:48.879 09:31:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:05:48.879 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:48.879 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:48.879 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:48.880 09:31:26 
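[editor's illustration] The no_shrink_alloc setup that begins here requests 2097152 kB of hugepages pinned to node 0, and the trace shows nr_hugepages=1024 being derived from it; with the 2048 kB Hugepagesize reported in the meminfo dumps further down, 2097152 / 2048 = 1024. A minimal sketch of that arithmetic follows, assuming the helper name and structure purely for illustration (it is not the real setup/hugepages.sh code):

  # Illustrative sketch only: derive a hugepage count from a size request,
  # as the traced get_test_nr_hugepages call appears to do for "2097152 0".
  get_test_nr_hugepages_sketch() {
      local size_kb=$1                 # requested size in kB, e.g. 2097152
      shift
      local node_ids=("$@")            # optional node list, e.g. 0
      local hugepagesize_kb
      hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this runner
      local nr_hugepages=$(( size_kb / hugepagesize_kb ))                  # 2097152 / 2048 = 1024
      echo "nr_hugepages=$nr_hugepages on nodes: ${node_ids[*]:-all}"
  }

  get_test_nr_hugepages_sketch 2097152 0   # -> nr_hugepages=1024 on nodes: 0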
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:48.880 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:48.880 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:48.880 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:48.880 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:48.880 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:48.880 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:48.880 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:48.880 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:48.880 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:48.880 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:48.880 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:48.880 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:48.880 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:48.880 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:48.880 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:48.880 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:48.880 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.880 09:31:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:49.449 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:49.449 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:49.449 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:49.449 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:49.449 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:49.449 09:31:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6502320 kB' 'MemAvailable: 9499904 kB' 'Buffers: 2436 kB' 'Cached: 3200604 kB' 'SwapCached: 0 kB' 'Active: 460044 kB' 'Inactive: 2860540 kB' 'Active(anon): 128020 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118860 kB' 'Mapped: 48200 kB' 'Shmem: 10476 kB' 'KReclaimable: 83920 kB' 'Slab: 166204 kB' 'SReclaimable: 83920 kB' 'SUnreclaim: 82284 kB' 'KernelStack: 6480 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.449 09:31:27 
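[editor's illustration] The get_meminfo calls traced above (setup/common.sh lines 17-33) pick either /proc/meminfo or /sys/devices/system/node/node<N>/meminfo, strip any "Node N " prefix, then walk the fields with IFS=': ' until the requested key matches, echoing its value; that is why every other field produces a "continue" line in the log. A condensed sketch of that loop, hedged as an illustration of what the trace shows rather than the exact upstream script:

  # Illustrative condensation of the get_meminfo loop seen in the xtrace.
  shopt -s extglob                       # the "Node +([0-9]) " strip uses extglob, as in the trace
  get_meminfo_sketch() {
      local get=$1 node=$2 mem_f=/proc/meminfo
      # Prefer the per-node meminfo when a node id is given and the file exists.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix on per-node files
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }   # e.g. AnonHugePages -> 0
      done
      return 1
  }

  get_meminfo_sketch HugePages_Surp 0    # prints 0 on this runner, matching the trace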
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.449 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.450 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.713 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.713 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.713 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:49.713 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6502320 kB' 'MemAvailable: 9499904 kB' 'Buffers: 2436 kB' 'Cached: 3200604 kB' 'SwapCached: 0 kB' 'Active: 459752 kB' 'Inactive: 2860540 kB' 'Active(anon): 127728 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118828 kB' 'Mapped: 48108 kB' 'Shmem: 10476 kB' 'KReclaimable: 83920 kB' 'Slab: 166200 kB' 'SReclaimable: 83920 kB' 'SUnreclaim: 82280 kB' 
'KernelStack: 6464 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.714 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 
09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.715 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6502320 kB' 'MemAvailable: 9499904 kB' 'Buffers: 2436 kB' 'Cached: 3200604 kB' 'SwapCached: 0 kB' 'Active: 459912 kB' 'Inactive: 2860540 kB' 'Active(anon): 127888 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118988 kB' 'Mapped: 48108 kB' 'Shmem: 10476 kB' 'KReclaimable: 83920 kB' 'Slab: 166196 kB' 'SReclaimable: 83920 kB' 'SUnreclaim: 82276 kB' 'KernelStack: 6464 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.716 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 
09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.717 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.718 09:31:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:49.718 nr_hugepages=1024 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:49.718 resv_hugepages=0 00:05:49.718 surplus_hugepages=0 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:49.718 anon_hugepages=0 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.718 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6502320 kB' 'MemAvailable: 9499904 kB' 'Buffers: 2436 kB' 'Cached: 3200604 kB' 'SwapCached: 0 kB' 'Active: 459760 kB' 'Inactive: 2860540 kB' 'Active(anon): 127736 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118824 kB' 'Mapped: 48108 kB' 'Shmem: 10476 kB' 'KReclaimable: 83920 kB' 'Slab: 166196 kB' 'SReclaimable: 83920 kB' 'SUnreclaim: 82276 kB' 'KernelStack: 6464 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 
'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.719 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # 
return 0 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6502320 kB' 'MemUsed: 5739652 kB' 'SwapCached: 0 kB' 'Active: 459824 kB' 'Inactive: 2860540 kB' 'Active(anon): 127800 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 3203040 kB' 'Mapped: 48108 kB' 'AnonPages: 118944 kB' 'Shmem: 10476 kB' 'KernelStack: 6480 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83920 kB' 'Slab: 166196 kB' 'SReclaimable: 83920 kB' 'SUnreclaim: 82276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:49.720 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
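The xtrace entries above and below come from setup/common.sh's get_meminfo helper: it picks /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node id is passed), strips any leading "Node N " prefix, then walks every "key: value" pair until the requested key (HugePages_Surp, then HugePages_Rsvd, then HugePages_Total, and now HugePages_Surp for node 0) matches and its value is echoed. Below is a minimal bash sketch reconstructed from the traced commands; the while/printf plumbing and the extglob setting are assumptions, since the trace shows the individual commands but not the surrounding control flow.

# Sketch of the helper traced above (reconstructed, not the verbatim SPDK source):
# resolve the meminfo file, normalize per-node lines, then scan key/value pairs
# until the requested key is found.
shopt -s extglob   # assumed: needed for the +([0-9]) pattern used below
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f mem
    mem_f=/proc/meminfo
    # Per-node view, e.g. /sys/devices/system/node/node0/meminfo, when it exists.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix on per-node lines
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip MemTotal, MemFree, ... until the key matches
        echo "$val"                        # e.g. 0 for HugePages_Surp, 1024 for HugePages_Total
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Usage mirroring the hugepages.sh lines in this trace (values from this run):
surp=$(get_meminfo HugePages_Surp)         # -> 0
resv=$(get_meminfo HugePages_Rsvd)         # -> 0
total=$(get_meminfo HugePages_Total)       # -> 1024
node0_surp=$(get_meminfo HugePages_Surp 0) # per-node scan, as in the trace below

With those values the no_shrink_alloc test records surp=0 and resv=0, reports nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, checks that 1024 == nr_hugepages + surp + resv, and then repeats the same scan against node 0's meminfo for HugePages_Surp, which is what the remainder of the trace shows.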
00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.721 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.722 09:31:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:49.722 node0=1024 expecting 1024 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.722 09:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:50.290 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:50.554 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:50.554 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:50.554 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:50.554 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:50.554 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:50.554 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:50.554 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:50.555 09:31:28 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6505164 kB' 'MemAvailable: 9502752 kB' 'Buffers: 2436 kB' 'Cached: 3200608 kB' 'SwapCached: 0 kB' 'Active: 459876 kB' 'Inactive: 2860544 kB' 'Active(anon): 127852 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118972 kB' 'Mapped: 48228 kB' 'Shmem: 10476 kB' 'KReclaimable: 83920 kB' 'Slab: 166140 kB' 'SReclaimable: 83920 kB' 'SUnreclaim: 82220 kB' 'KernelStack: 6440 kB' 'PageTables: 3532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.555 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.556 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6505164 kB' 'MemAvailable: 9502752 kB' 'Buffers: 2436 kB' 'Cached: 3200608 kB' 'SwapCached: 0 kB' 'Active: 459780 kB' 'Inactive: 
2860544 kB' 'Active(anon): 127756 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118872 kB' 'Mapped: 48108 kB' 'Shmem: 10476 kB' 'KReclaimable: 83920 kB' 'Slab: 166140 kB' 'SReclaimable: 83920 kB' 'SUnreclaim: 82220 kB' 'KernelStack: 6464 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55172 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.557 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
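[editor's note] The accounting being re-checked in this pass is the same comparison seen at hugepages.sh@110 earlier in the trace: the kernel's HugePages_Total must equal the expected count plus surplus plus reserved pages. A minimal sketch of that arithmetic against /proc/meminfo follows; read_meminfo and the hard-coded 1024 are illustrative, not the repo's hugepages.sh.

    # Sketch of the total == expected + surp + resv check, values from /proc/meminfo.
    read_meminfo() { awk -v k="$1" '$1 == k":" {print $2}' /proc/meminfo; }

    total=$(read_meminfo HugePages_Total)
    surp=$(read_meminfo HugePages_Surp)
    resv=$(read_meminfo HugePages_Rsvd)
    expected=1024   # illustrative; the trace above expects 1024 on node0

    (( total == expected + surp + resv )) || echo "hugepage count mismatch" >&2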
00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.558 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6505164 kB' 'MemAvailable: 9502752 kB' 'Buffers: 2436 kB' 'Cached: 3200608 kB' 'SwapCached: 0 kB' 'Active: 459780 kB' 'Inactive: 2860544 kB' 'Active(anon): 127756 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118872 kB' 'Mapped: 48108 kB' 'Shmem: 10476 kB' 'KReclaimable: 83920 kB' 'Slab: 166140 kB' 'SReclaimable: 83920 kB' 'SUnreclaim: 82220 kB' 
'KernelStack: 6464 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.559 09:31:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.559 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 
09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.560 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:50.561 nr_hugepages=1024 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:50.561 resv_hugepages=0 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:50.561 surplus_hugepages=0 00:05:50.561 anon_hugepages=0 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6505164 kB' 'MemAvailable: 9502752 kB' 'Buffers: 2436 kB' 'Cached: 3200608 kB' 'SwapCached: 0 kB' 'Active: 459776 kB' 'Inactive: 2860544 kB' 'Active(anon): 127752 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118868 kB' 'Mapped: 48108 kB' 'Shmem: 10476 kB' 'KReclaimable: 
83920 kB' 'Slab: 166140 kB' 'SReclaimable: 83920 kB' 'SUnreclaim: 82220 kB' 'KernelStack: 6464 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.561 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.562 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6505164 kB' 'MemUsed: 5736808 kB' 'SwapCached: 0 kB' 'Active: 459724 kB' 'Inactive: 2860544 kB' 'Active(anon): 127700 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2860544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 3203044 kB' 'Mapped: 48108 kB' 'AnonPages: 118820 kB' 'Shmem: 10476 kB' 'KernelStack: 6464 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83920 kB' 'Slab: 166140 kB' 'SReclaimable: 83920 kB' 'SUnreclaim: 82220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.563 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:50.565 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:50.825 node0=1024 expecting 1024 00:05:50.825 ************************************ 00:05:50.825 END TEST no_shrink_alloc 00:05:50.825 ************************************ 00:05:50.825 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:50.825 09:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:50.825 00:05:50.825 real 0m1.877s 00:05:50.825 user 0m0.786s 00:05:50.825 sys 0m1.167s 00:05:50.825 09:31:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.825 09:31:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:50.825 09:31:28 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:50.825 09:31:28 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:50.825 09:31:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:50.825 09:31:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:50.825 09:31:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:50.825 09:31:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:50.825 09:31:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:50.825 09:31:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:50.825 09:31:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:50.825 00:05:50.825 real 0m8.301s 00:05:50.825 user 0m3.407s 00:05:50.825 sys 0m5.032s 00:05:50.825 ************************************ 00:05:50.825 END TEST hugepages 00:05:50.825 ************************************ 00:05:50.825 09:31:28 setup.sh.hugepages -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.825 09:31:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:50.825 09:31:28 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:50.825 09:31:28 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.825 09:31:28 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.825 09:31:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:50.825 ************************************ 00:05:50.825 START TEST driver 00:05:50.825 ************************************ 00:05:50.825 09:31:28 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:50.825 * Looking for test storage... 00:05:50.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:50.825 09:31:28 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:50.825 09:31:28 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:50.825 09:31:28 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:57.439 09:31:34 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:57.439 09:31:34 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.439 09:31:34 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.439 09:31:34 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:57.439 ************************************ 00:05:57.439 START TEST guess_driver 00:05:57.439 ************************************ 00:05:57.439 09:31:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:05:57.439 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:57.439 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:57.439 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:57.439 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:57.440 insmod 
/lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:57.440 Looking for driver=uio_pci_generic 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:57.440 09:31:34 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:58.007 09:31:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:58.007 09:31:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:58.007 09:31:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:58.574 09:31:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:58.574 09:31:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:58.574 09:31:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:58.574 09:31:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:58.574 09:31:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:58.574 09:31:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:58.574 09:31:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:58.574 09:31:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:58.574 09:31:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:58.832 09:31:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:58.832 09:31:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:58.832 09:31:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:58.832 09:31:36 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:58.832 09:31:36 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:58.832 09:31:36 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:58.832 09:31:36 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:05.416 00:06:05.416 real 0m7.935s 00:06:05.416 user 0m0.982s 00:06:05.416 sys 0m2.141s 00:06:05.416 09:31:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.416 09:31:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:05.416 ************************************ 00:06:05.416 END TEST guess_driver 00:06:05.416 ************************************ 00:06:05.416 00:06:05.416 real 0m14.363s 
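The guess_driver pass above reduces to a single rule: prefer vfio when IOMMU groups exist (or unsafe no-IOMMU mode is enabled), otherwise fall back to uio_pci_generic if modprobe can resolve the module. A minimal sketch of that rule follows; it is not the SPDK setup/driver.sh itself, and the function name pick_driver and the vfio-pci spelling are illustrative:

#!/usr/bin/env bash
pick_driver() {
    local unsafe=""
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    # vfio is usable when IOMMU groups are present or unsafe no-IOMMU mode is on
    if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null || [[ $unsafe == Y ]]; then
        echo vfio-pci
    # otherwise accept uio_pci_generic if modprobe resolves it to a kernel module
    elif modprobe --show-depends uio_pci_generic 2> /dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi
}
pick_driver

In the run above the IOMMU group count is 0 and no-IOMMU mode is not enabled, so the fallback path wins and uio_pci_generic is selected.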
00:06:05.416 user 0m1.503s 00:06:05.416 sys 0m3.252s 00:06:05.416 09:31:42 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.416 09:31:42 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:05.416 ************************************ 00:06:05.416 END TEST driver 00:06:05.416 ************************************ 00:06:05.416 09:31:42 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:05.416 09:31:42 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.416 09:31:42 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.416 09:31:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:05.416 ************************************ 00:06:05.416 START TEST devices 00:06:05.416 ************************************ 00:06:05.416 09:31:42 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:05.416 * Looking for test storage... 00:06:05.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:05.416 09:31:43 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:05.416 09:31:43 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:05.416 09:31:43 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:05.416 09:31:43 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:06.793 09:31:44 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:06.793 09:31:44 
setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:06.793 09:31:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:06.793 09:31:44 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:06.793 09:31:44 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:06.793 09:31:44 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:06.793 09:31:44 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:06.793 09:31:44 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:06.793 09:31:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:06.793 09:31:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:06.793 09:31:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:06.793 09:31:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:06.793 09:31:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:06.793 09:31:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:06.793 09:31:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:06.793 09:31:44 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:06:06.793 No valid GPT data, bailing 00:06:07.052 09:31:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:07.052 09:31:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:07.052 09:31:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:07.052 09:31:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes 
nvme0n1 00:06:07.052 09:31:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:07.052 09:31:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:07.052 09:31:44 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:06:07.052 09:31:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:06:07.052 09:31:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:07.052 09:31:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:07.052 09:31:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:07.052 09:31:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:06:07.052 09:31:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:06:07.052 09:31:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:06:07.053 09:31:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:06:07.053 09:31:44 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:06:07.053 No valid GPT data, bailing 00:06:07.053 09:31:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:07.053 09:31:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:07.053 09:31:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:06:07.053 09:31:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:06:07.053 09:31:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:06:07.053 09:31:44 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:06:07.053 09:31:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:06:07.053 09:31:44 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:06:07.053 No valid GPT data, bailing 00:06:07.053 09:31:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:06:07.053 09:31:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:07.053 09:31:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:06:07.053 09:31:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:06:07.053 
09:31:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:06:07.053 09:31:44 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:06:07.053 09:31:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:06:07.053 09:31:44 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:06:07.053 No valid GPT data, bailing 00:06:07.053 09:31:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:06:07.053 09:31:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:07.053 09:31:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:06:07.053 09:31:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:06:07.053 09:31:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:06:07.053 09:31:44 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:07.053 09:31:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:06:07.312 09:31:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:07.312 09:31:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:07.312 09:31:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:07.312 09:31:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:06:07.312 09:31:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:06:07.312 09:31:44 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:06:07.312 No valid GPT data, bailing 00:06:07.312 09:31:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:06:07.312 09:31:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:07.313 09:31:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:07.313 09:31:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:06:07.313 09:31:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:06:07.313 09:31:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:06:07.313 09:31:44 
setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:07.313 09:31:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:07.313 09:31:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:07.313 09:31:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:07.313 09:31:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:07.313 09:31:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:06:07.313 09:31:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:06:07.313 09:31:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:06:07.313 09:31:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:06:07.313 09:31:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:06:07.313 09:31:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:06:07.313 09:31:44 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:06:07.313 No valid GPT data, bailing 00:06:07.313 09:31:45 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:06:07.313 09:31:45 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:07.313 09:31:45 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:07.313 09:31:45 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:06:07.313 09:31:45 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:06:07.313 09:31:45 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:06:07.313 09:31:45 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:06:07.313 09:31:45 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:06:07.313 09:31:45 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:06:07.313 09:31:45 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:07.313 09:31:45 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:07.313 09:31:45 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.313 09:31:45 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.313 09:31:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:07.313 ************************************ 00:06:07.313 START TEST nvme_mount 00:06:07.313 ************************************ 00:06:07.313 09:31:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:06:07.313 09:31:45 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:07.313 09:31:45 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:07.313 09:31:45 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:07.313 09:31:45 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:07.313 09:31:45 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:07.313 09:31:45 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:07.313 09:31:45 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:07.313 09:31:45 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 
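Before nvme_mount begins, the devices test above has filtered the NVMe namespaces down to usable candidates: zoned namespaces are skipped, namespaces that already carry a partition table count as in use, and only block devices of at least min_disk_size (3 GiB) are kept. A rough sketch of that filtering loop under the same assumptions (helper-free, using blkid in place of spdk-gpt.py; variable names are illustrative):

#!/usr/bin/env bash
shopt -s nullglob
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in the log
blocks=()
for dev in /sys/block/nvme*; do
    name=${dev##*/}
    [[ $name == *c* ]] && continue                         # skip controller nodes such as nvme3c3n1
    [[ -e $dev/queue/zoned && $(< "$dev/queue/zoned") != none ]] && continue
    # a device that already has a partition table is treated as in use
    blkid -s PTTYPE -o value "/dev/$name" > /dev/null 2>&1 && continue
    size=$(( $(< "$dev/size") * 512 ))                     # sectors -> bytes
    (( size >= min_disk_size )) && blocks+=("$name")
done
printf 'candidate disks: %s\n' "${blocks[*]}"

In this run nvme0n1 (5 GiB), nvme1n1 (~6 GiB) and the three 4 GiB nvme2 namespaces pass the filter while the 1 GiB nvme3n1 does not, leaving the five candidates counted above with nvme0n1 declared the test disk.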
00:06:07.313 09:31:45 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:07.313 09:31:45 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:07.313 09:31:45 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:07.313 09:31:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:07.313 09:31:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:07.313 09:31:45 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:07.313 09:31:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:07.313 09:31:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:07.313 09:31:45 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:07.313 09:31:45 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:07.313 09:31:45 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:08.689 Creating new GPT entries in memory. 00:06:08.689 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:08.689 other utilities. 00:06:08.689 09:31:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:08.689 09:31:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:08.689 09:31:46 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:08.689 09:31:46 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:08.690 09:31:46 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:09.626 Creating new GPT entries in memory. 00:06:09.626 The operation has completed successfully. 
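The partition step just logged is the standard sgdisk sequence: wipe any existing label, create one 128 MiB partition (sectors 2048-264191, i.e. 262144 512-byte sectors) under flock, wait for the kernel to publish the new partition node, then format and mount it. A condensed sketch under the same assumptions; udevadm settle stands in for scripts/sync_dev_uevents.sh, and the mount point is a placeholder:

#!/usr/bin/env bash
set -e
disk=/dev/nvme0n1
mnt=/mnt/nvme_mount                                  # placeholder for the test mount point
sgdisk "$disk" --zap-all                             # destroy old GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:264191     # 262144 sectors = 128 MiB partition 1
udevadm settle                                       # wait until /dev/nvme0n1p1 appears
mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt" && mount "${disk}p1" "$mnt"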
00:06:09.626 09:31:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:09.626 09:31:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:09.626 09:31:47 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 71852 00:06:09.626 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:09.626 09:31:47 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:06:09.626 09:31:47 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:09.626 09:31:47 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:09.626 09:31:47 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:09.626 09:31:47 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:09.626 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:09.626 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:09.626 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:09.626 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:09.626 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:09.626 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:09.626 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:09.626 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:09.627 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:09.627 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.627 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:09.627 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:09.627 09:31:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:09.627 09:31:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:09.886 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:09.886 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:09.886 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:09.886 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.886 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:09.886 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.145 09:31:47 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:10.145 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.145 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:10.145 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.145 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:10.145 09:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.402 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:10.402 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.660 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:10.660 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:10.660 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:10.660 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:10.660 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:10.660 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:10.660 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:10.660 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:10.660 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:10.660 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:10.919 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:10.919 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:10.919 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:11.177 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:11.177 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:11.177 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:11.177 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:11.177 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:06:11.177 09:31:48 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:06:11.177 09:31:48 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:11.177 09:31:48 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:11.177 09:31:48 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:11.177 09:31:48 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:11.177 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:11.178 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:11.178 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:11.178 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:11.178 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:11.178 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:11.178 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:11.178 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:11.178 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:11.178 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.178 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:11.178 09:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:11.178 09:31:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:11.178 09:31:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:11.437 09:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.437 09:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:11.437 09:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:11.437 09:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.437 09:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.437 09:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.695 09:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.696 09:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.696 09:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.696 09:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.696 09:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.696 09:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.954 09:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.954 09:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.215 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:12.215 09:31:50 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:12.215 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:12.215 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:12.215 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:12.215 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:12.474 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:06:12.474 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:12.474 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:12.474 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:12.474 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:12.474 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:12.474 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:12.474 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:12.474 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.474 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:12.474 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:12.474 09:31:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:12.474 09:31:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:12.733 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.733 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:12.733 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:12.733 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.733 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.733 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.991 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.991 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.991 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.991 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.991 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.991 09:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.250 09:31:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:13.250 09:31:51 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.818 09:31:51 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:13.818 09:31:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:13.818 09:31:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:13.818 09:31:51 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:13.818 09:31:51 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:13.818 09:31:51 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:13.818 09:31:51 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:13.818 09:31:51 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:13.818 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:13.818 00:06:13.818 real 0m6.302s 00:06:13.818 user 0m1.667s 00:06:13.818 sys 0m2.321s 00:06:13.818 09:31:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.818 09:31:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:13.818 ************************************ 00:06:13.818 END TEST nvme_mount 00:06:13.818 ************************************ 00:06:13.818 09:31:51 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:13.818 09:31:51 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.818 09:31:51 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.818 09:31:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:13.818 ************************************ 00:06:13.818 START TEST dm_mount 00:06:13.818 ************************************ 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- 
setup/common.sh@46 -- # (( part++ )) 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:13.818 09:31:51 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:14.756 Creating new GPT entries in memory. 00:06:14.756 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:14.756 other utilities. 00:06:14.756 09:31:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:14.756 09:31:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:14.756 09:31:52 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:14.756 09:31:52 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:14.756 09:31:52 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:15.713 Creating new GPT entries in memory. 00:06:15.713 The operation has completed successfully. 00:06:15.713 09:31:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:15.713 09:31:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:15.713 09:31:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:15.713 09:31:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:15.713 09:31:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:06:17.101 The operation has completed successfully. 
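dm_mount then stitches the two freshly created 128 MiB partitions (partition 1: sectors 2048-264191, partition 2: sectors 264192-526335) into a single device-mapper device named nvme_dm_test before formatting and mounting it. The dmsetup create call is visible in the log but its table is not, so the linear concatenation below is an assumption about how the two partitions are combined (sizes are in 512-byte sectors):

#!/usr/bin/env bash
set -e
p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")          # partition sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")
# one linear segment per partition, laid out back to back
dmsetup create nvme_dm_test << EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
readlink -f /dev/mapper/nvme_dm_test  # resolves to /dev/dm-0 in the run above
mkfs.ext4 -qF /dev/mapper/nvme_dm_test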
00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 72491 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:17.101 09:31:54 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:17.361 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:17.361 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:17.361 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:17.361 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.361 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:17.361 09:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.361 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:17.361 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.619 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:17.619 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.619 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:17.619 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.878 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:17.878 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.138 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:18.138 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:06:18.138 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:18.138 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:18.138 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:18.138 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:18.138 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:18.138 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:18.138 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:18.138 09:31:55 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:06:18.138 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:18.138 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:18.138 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:18.138 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:18.138 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.138 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:18.138 09:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:18.138 09:31:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:18.138 09:31:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:18.397 09:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:18.397 09:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:18.397 09:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:18.397 09:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.397 09:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:18.397 09:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.656 09:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:18.656 09:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.656 09:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:18.656 09:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.656 09:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:18.656 09:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.223 09:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:19.223 09:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.223 09:31:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:19.223 09:31:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:19.223 09:31:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:19.223 09:31:57 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:19.223 09:31:57 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:19.223 09:31:57 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:19.223 09:31:57 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:19.481 09:31:57 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:19.481 09:31:57 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:06:19.481 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:19.481 09:31:57 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:19.481 09:31:57 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:19.481 00:06:19.481 real 0m5.673s 00:06:19.481 user 0m1.089s 00:06:19.481 sys 0m1.512s 00:06:19.481 09:31:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.481 09:31:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:19.481 ************************************ 00:06:19.481 END TEST dm_mount 00:06:19.481 ************************************ 00:06:19.481 09:31:57 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:19.481 09:31:57 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:19.481 09:31:57 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:19.481 09:31:57 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:19.482 09:31:57 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:19.482 09:31:57 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:19.482 09:31:57 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:19.741 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:19.741 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:19.741 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:19.741 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:19.741 09:31:57 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:19.741 09:31:57 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:19.741 09:31:57 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:19.741 09:31:57 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:19.741 09:31:57 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:19.741 09:31:57 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:19.741 09:31:57 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:19.741 00:06:19.741 real 0m14.526s 00:06:19.741 user 0m3.783s 00:06:19.741 sys 0m5.069s 00:06:19.741 09:31:57 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.741 09:31:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:19.741 ************************************ 00:06:19.741 END TEST devices 00:06:19.741 ************************************ 00:06:19.741 00:06:19.741 real 0m51.763s 00:06:19.741 user 0m12.450s 00:06:19.741 sys 0m19.318s 00:06:19.741 09:31:57 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.741 09:31:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:19.741 ************************************ 00:06:19.741 END TEST setup.sh 00:06:19.741 ************************************ 00:06:20.000 09:31:57 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:20.567 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:21.180 Hugepages 00:06:21.180 node hugesize free / total 00:06:21.180 node0 1048576kB 0 / 0 00:06:21.180 node0 2048kB 2048 / 2048 00:06:21.180 
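Condensed from the cleanup_dm and cleanup_nvme calls traced above (guards and ordering simplified; device names taken from this run), the teardown amounts to:

    mountpoint -q "$dm_mount" && umount "$dm_mount"      # unmount the dm filesystem if still mounted
    [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
    wipefs --all /dev/nvme0n1p1                          # clears the ext4 signature (the "53 ef" bytes logged above)
    wipefs --all /dev/nvme0n1p2
    wipefs --all /dev/nvme0n1                            # clears both GPT headers and the protective MBR, then re-reads the partition table

With the scratch partitions gone, the suites report their timings and autotest moves on to the setup.sh status listing that follows.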
00:06:21.180 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:21.180 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:21.180 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:21.438 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:21.438 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:06:21.697 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:06:21.697 09:31:59 -- spdk/autotest.sh@130 -- # uname -s 00:06:21.697 09:31:59 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:21.697 09:31:59 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:21.697 09:31:59 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:22.265 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:22.832 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:22.832 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:22.832 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:23.091 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:23.091 09:32:00 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:24.465 09:32:01 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:24.465 09:32:01 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:24.465 09:32:01 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:24.465 09:32:01 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:24.465 09:32:01 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:24.465 09:32:01 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:24.465 09:32:01 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:24.465 09:32:01 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:24.465 09:32:01 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:24.465 09:32:01 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:06:24.465 09:32:01 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:24.465 09:32:01 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:24.724 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:25.026 Waiting for block devices as requested 00:06:25.026 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:25.285 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:25.285 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:25.544 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:30.815 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:30.815 09:32:08 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:30.815 09:32:08 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:30.815 09:32:08 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:30.815 09:32:08 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:06:30.815 09:32:08 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:30.815 09:32:08 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:30.815 09:32:08 -- common/autotest_common.sh@1507 -- # basename 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:30.815 09:32:08 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:06:30.815 09:32:08 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:06:30.815 09:32:08 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:06:30.815 09:32:08 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:06:30.815 09:32:08 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:30.815 09:32:08 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:30.815 09:32:08 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:30.815 09:32:08 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:30.815 09:32:08 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:30.815 09:32:08 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:06:30.815 09:32:08 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:30.815 09:32:08 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:30.815 09:32:08 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:30.815 09:32:08 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:30.815 09:32:08 -- common/autotest_common.sh@1557 -- # continue 00:06:30.815 09:32:08 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:30.815 09:32:08 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:30.815 09:32:08 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:30.815 09:32:08 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:06:30.815 09:32:08 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:30.815 09:32:08 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:30.815 09:32:08 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:30.815 09:32:08 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:30.815 09:32:08 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:30.815 09:32:08 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:30.815 09:32:08 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:30.815 09:32:08 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:30.815 09:32:08 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:30.815 09:32:08 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:30.815 09:32:08 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:30.815 09:32:08 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:30.815 09:32:08 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:30.815 09:32:08 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:30.815 09:32:08 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:30.815 09:32:08 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:30.815 09:32:08 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:30.815 09:32:08 -- common/autotest_common.sh@1557 -- # continue 00:06:30.815 09:32:08 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:30.815 09:32:08 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:30.815 09:32:08 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:30.815 09:32:08 -- common/autotest_common.sh@1502 -- # 
grep 0000:00:12.0/nvme/nvme 00:06:30.815 09:32:08 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:30.815 09:32:08 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:30.815 09:32:08 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:30.815 09:32:08 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:06:30.815 09:32:08 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:06:30.815 09:32:08 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:06:30.815 09:32:08 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:06:30.815 09:32:08 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:30.815 09:32:08 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:30.815 09:32:08 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:30.815 09:32:08 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:30.815 09:32:08 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:30.815 09:32:08 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:06:30.815 09:32:08 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:30.815 09:32:08 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:30.815 09:32:08 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:30.815 09:32:08 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:30.815 09:32:08 -- common/autotest_common.sh@1557 -- # continue 00:06:30.815 09:32:08 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:30.815 09:32:08 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:30.815 09:32:08 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:06:30.815 09:32:08 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:30.815 09:32:08 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:30.815 09:32:08 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:30.815 09:32:08 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:30.815 09:32:08 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:06:30.815 09:32:08 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:06:30.815 09:32:08 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:06:30.815 09:32:08 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:06:30.815 09:32:08 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:30.815 09:32:08 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:30.815 09:32:08 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:30.815 09:32:08 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:30.815 09:32:08 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:30.815 09:32:08 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:06:30.815 09:32:08 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:30.815 09:32:08 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:30.815 09:32:08 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:30.815 09:32:08 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:30.815 09:32:08 -- common/autotest_common.sh@1557 -- # continue 00:06:30.815 09:32:08 -- spdk/autotest.sh@135 -- # timing_exit 
pre_cleanup 00:06:30.815 09:32:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:30.815 09:32:08 -- common/autotest_common.sh@10 -- # set +x 00:06:30.815 09:32:08 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:30.815 09:32:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:30.815 09:32:08 -- common/autotest_common.sh@10 -- # set +x 00:06:30.815 09:32:08 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:31.383 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:32.024 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:32.024 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:32.024 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:32.283 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:32.283 09:32:09 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:32.283 09:32:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:32.283 09:32:09 -- common/autotest_common.sh@10 -- # set +x 00:06:32.283 09:32:10 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:32.283 09:32:10 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:32.283 09:32:10 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:32.283 09:32:10 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:32.283 09:32:10 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:32.283 09:32:10 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:32.283 09:32:10 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:32.283 09:32:10 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:32.283 09:32:10 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:32.283 09:32:10 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:32.283 09:32:10 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:32.541 09:32:10 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:06:32.541 09:32:10 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:32.541 09:32:10 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:32.541 09:32:10 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:32.541 09:32:10 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:32.541 09:32:10 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:32.541 09:32:10 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:32.541 09:32:10 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:32.541 09:32:10 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:32.541 09:32:10 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:32.541 09:32:10 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:32.541 09:32:10 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:32.541 09:32:10 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:32.541 09:32:10 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:32.541 09:32:10 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:32.541 09:32:10 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:32.541 09:32:10 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:32.542 
09:32:10 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:32.542 09:32:10 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:06:32.542 09:32:10 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:06:32.542 09:32:10 -- common/autotest_common.sh@1593 -- # return 0 00:06:32.542 09:32:10 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:32.542 09:32:10 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:32.542 09:32:10 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:32.542 09:32:10 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:32.542 09:32:10 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:32.542 09:32:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:32.542 09:32:10 -- common/autotest_common.sh@10 -- # set +x 00:06:32.542 09:32:10 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:32.542 09:32:10 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:32.542 09:32:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.542 09:32:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.542 09:32:10 -- common/autotest_common.sh@10 -- # set +x 00:06:32.542 ************************************ 00:06:32.542 START TEST env 00:06:32.542 ************************************ 00:06:32.542 09:32:10 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:32.542 * Looking for test storage... 00:06:32.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:32.542 09:32:10 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:32.542 09:32:10 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.542 09:32:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.542 09:32:10 env -- common/autotest_common.sh@10 -- # set +x 00:06:32.542 ************************************ 00:06:32.542 START TEST env_memory 00:06:32.542 ************************************ 00:06:32.542 09:32:10 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:32.542 00:06:32.542 00:06:32.542 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.542 http://cunit.sourceforge.net/ 00:06:32.542 00:06:32.542 00:06:32.542 Suite: memory 00:06:32.800 Test: alloc and free memory map ...[2024-07-24 09:32:10.374528] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:32.800 passed 00:06:32.800 Test: mem map translation ...[2024-07-24 09:32:10.414884] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:32.800 [2024-07-24 09:32:10.414935] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:32.800 [2024-07-24 09:32:10.415001] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:32.800 [2024-07-24 09:32:10.415024] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:32.800 passed 00:06:32.800 Test: mem map registration ...[2024-07-24 09:32:10.478749] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register 
parameters, vaddr=0x200000 len=1234 00:06:32.801 [2024-07-24 09:32:10.478798] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:32.801 passed 00:06:32.801 Test: mem map adjacent registrations ...passed 00:06:32.801 00:06:32.801 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.801 suites 1 1 n/a 0 0 00:06:32.801 tests 4 4 4 0 0 00:06:32.801 asserts 152 152 152 0 n/a 00:06:32.801 00:06:32.801 Elapsed time = 0.227 seconds 00:06:32.801 00:06:32.801 real 0m0.280s 00:06:32.801 user 0m0.246s 00:06:32.801 sys 0m0.026s 00:06:32.801 09:32:10 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.801 09:32:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:32.801 ************************************ 00:06:32.801 END TEST env_memory 00:06:32.801 ************************************ 00:06:33.059 09:32:10 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:33.059 09:32:10 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.059 09:32:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.059 09:32:10 env -- common/autotest_common.sh@10 -- # set +x 00:06:33.059 ************************************ 00:06:33.059 START TEST env_vtophys 00:06:33.059 ************************************ 00:06:33.059 09:32:10 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:33.059 EAL: lib.eal log level changed from notice to debug 00:06:33.059 EAL: Detected lcore 0 as core 0 on socket 0 00:06:33.059 EAL: Detected lcore 1 as core 0 on socket 0 00:06:33.059 EAL: Detected lcore 2 as core 0 on socket 0 00:06:33.059 EAL: Detected lcore 3 as core 0 on socket 0 00:06:33.059 EAL: Detected lcore 4 as core 0 on socket 0 00:06:33.059 EAL: Detected lcore 5 as core 0 on socket 0 00:06:33.059 EAL: Detected lcore 6 as core 0 on socket 0 00:06:33.059 EAL: Detected lcore 7 as core 0 on socket 0 00:06:33.059 EAL: Detected lcore 8 as core 0 on socket 0 00:06:33.059 EAL: Detected lcore 9 as core 0 on socket 0 00:06:33.059 EAL: Maximum logical cores by configuration: 128 00:06:33.059 EAL: Detected CPU lcores: 10 00:06:33.059 EAL: Detected NUMA nodes: 1 00:06:33.059 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:33.059 EAL: Detected shared linkage of DPDK 00:06:33.059 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:06:33.059 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:06:33.059 EAL: Registered [vdev] bus. 
00:06:33.059 EAL: bus.vdev log level changed from disabled to notice 00:06:33.059 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:06:33.059 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:06:33.059 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:33.059 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:33.059 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:06:33.059 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:06:33.059 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:06:33.059 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:06:33.059 EAL: No shared files mode enabled, IPC will be disabled 00:06:33.059 EAL: No shared files mode enabled, IPC is disabled 00:06:33.059 EAL: Selected IOVA mode 'PA' 00:06:33.059 EAL: Probing VFIO support... 00:06:33.059 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:33.059 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:33.059 EAL: Ask a virtual area of 0x2e000 bytes 00:06:33.059 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:33.059 EAL: Setting up physically contiguous memory... 00:06:33.059 EAL: Setting maximum number of open files to 524288 00:06:33.059 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:33.059 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:33.059 EAL: Ask a virtual area of 0x61000 bytes 00:06:33.059 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:33.059 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:33.059 EAL: Ask a virtual area of 0x400000000 bytes 00:06:33.059 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:33.059 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:33.059 EAL: Ask a virtual area of 0x61000 bytes 00:06:33.059 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:33.059 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:33.059 EAL: Ask a virtual area of 0x400000000 bytes 00:06:33.059 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:33.059 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:33.059 EAL: Ask a virtual area of 0x61000 bytes 00:06:33.059 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:33.059 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:33.059 EAL: Ask a virtual area of 0x400000000 bytes 00:06:33.059 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:33.059 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:33.059 EAL: Ask a virtual area of 0x61000 bytes 00:06:33.059 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:33.059 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:33.059 EAL: Ask a virtual area of 0x400000000 bytes 00:06:33.059 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:33.059 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:33.059 EAL: Hugepages will be freed exactly as allocated. 
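The virtual-area requests above follow directly from the memseg geometry EAL prints: with hugepage_sz 2097152 (2 MiB) and n_segs 8192 per list, each of the 4 lists needs

    8192 segments/list × 2 MiB/segment = 16 GiB = 0x400000000 bytes per list, × 4 lists = 64 GiB of reserved VA

which is exactly the 0x400000000-byte area requested per list (the small 0x61000-byte areas presumably hold each list's bookkeeping array). This is address-space reservation only; physical hugepages are attached later, which is why the vtophys test below logs matching "Heap on socket 0 was expanded/shrunk" pairs as it allocates and frees.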
00:06:33.059 EAL: No shared files mode enabled, IPC is disabled 00:06:33.060 EAL: No shared files mode enabled, IPC is disabled 00:06:33.060 EAL: TSC frequency is ~2490000 KHz 00:06:33.060 EAL: Main lcore 0 is ready (tid=7f45edbe5a40;cpuset=[0]) 00:06:33.060 EAL: Trying to obtain current memory policy. 00:06:33.060 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.060 EAL: Restoring previous memory policy: 0 00:06:33.060 EAL: request: mp_malloc_sync 00:06:33.060 EAL: No shared files mode enabled, IPC is disabled 00:06:33.060 EAL: Heap on socket 0 was expanded by 2MB 00:06:33.060 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:33.060 EAL: No shared files mode enabled, IPC is disabled 00:06:33.060 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:33.060 EAL: Mem event callback 'spdk:(nil)' registered 00:06:33.060 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:33.060 00:06:33.060 00:06:33.060 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.060 http://cunit.sourceforge.net/ 00:06:33.060 00:06:33.060 00:06:33.060 Suite: components_suite 00:06:33.627 Test: vtophys_malloc_test ...passed 00:06:33.627 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:33.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.627 EAL: Restoring previous memory policy: 4 00:06:33.627 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.627 EAL: request: mp_malloc_sync 00:06:33.627 EAL: No shared files mode enabled, IPC is disabled 00:06:33.627 EAL: Heap on socket 0 was expanded by 4MB 00:06:33.627 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.627 EAL: request: mp_malloc_sync 00:06:33.627 EAL: No shared files mode enabled, IPC is disabled 00:06:33.627 EAL: Heap on socket 0 was shrunk by 4MB 00:06:33.627 EAL: Trying to obtain current memory policy. 00:06:33.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.627 EAL: Restoring previous memory policy: 4 00:06:33.627 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.627 EAL: request: mp_malloc_sync 00:06:33.627 EAL: No shared files mode enabled, IPC is disabled 00:06:33.627 EAL: Heap on socket 0 was expanded by 6MB 00:06:33.627 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.627 EAL: request: mp_malloc_sync 00:06:33.627 EAL: No shared files mode enabled, IPC is disabled 00:06:33.627 EAL: Heap on socket 0 was shrunk by 6MB 00:06:33.627 EAL: Trying to obtain current memory policy. 00:06:33.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.627 EAL: Restoring previous memory policy: 4 00:06:33.627 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.627 EAL: request: mp_malloc_sync 00:06:33.627 EAL: No shared files mode enabled, IPC is disabled 00:06:33.627 EAL: Heap on socket 0 was expanded by 10MB 00:06:33.627 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.627 EAL: request: mp_malloc_sync 00:06:33.627 EAL: No shared files mode enabled, IPC is disabled 00:06:33.627 EAL: Heap on socket 0 was shrunk by 10MB 00:06:33.627 EAL: Trying to obtain current memory policy. 
00:06:33.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.627 EAL: Restoring previous memory policy: 4 00:06:33.627 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.627 EAL: request: mp_malloc_sync 00:06:33.627 EAL: No shared files mode enabled, IPC is disabled 00:06:33.627 EAL: Heap on socket 0 was expanded by 18MB 00:06:33.627 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.627 EAL: request: mp_malloc_sync 00:06:33.627 EAL: No shared files mode enabled, IPC is disabled 00:06:33.627 EAL: Heap on socket 0 was shrunk by 18MB 00:06:33.627 EAL: Trying to obtain current memory policy. 00:06:33.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.627 EAL: Restoring previous memory policy: 4 00:06:33.627 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.627 EAL: request: mp_malloc_sync 00:06:33.627 EAL: No shared files mode enabled, IPC is disabled 00:06:33.627 EAL: Heap on socket 0 was expanded by 34MB 00:06:33.627 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.627 EAL: request: mp_malloc_sync 00:06:33.627 EAL: No shared files mode enabled, IPC is disabled 00:06:33.627 EAL: Heap on socket 0 was shrunk by 34MB 00:06:33.627 EAL: Trying to obtain current memory policy. 00:06:33.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.627 EAL: Restoring previous memory policy: 4 00:06:33.627 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.627 EAL: request: mp_malloc_sync 00:06:33.627 EAL: No shared files mode enabled, IPC is disabled 00:06:33.627 EAL: Heap on socket 0 was expanded by 66MB 00:06:33.627 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.627 EAL: request: mp_malloc_sync 00:06:33.627 EAL: No shared files mode enabled, IPC is disabled 00:06:33.627 EAL: Heap on socket 0 was shrunk by 66MB 00:06:33.627 EAL: Trying to obtain current memory policy. 00:06:33.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.627 EAL: Restoring previous memory policy: 4 00:06:33.627 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.627 EAL: request: mp_malloc_sync 00:06:33.627 EAL: No shared files mode enabled, IPC is disabled 00:06:33.627 EAL: Heap on socket 0 was expanded by 130MB 00:06:33.627 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.627 EAL: request: mp_malloc_sync 00:06:33.627 EAL: No shared files mode enabled, IPC is disabled 00:06:33.627 EAL: Heap on socket 0 was shrunk by 130MB 00:06:33.627 EAL: Trying to obtain current memory policy. 00:06:33.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.627 EAL: Restoring previous memory policy: 4 00:06:33.627 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.627 EAL: request: mp_malloc_sync 00:06:33.627 EAL: No shared files mode enabled, IPC is disabled 00:06:33.627 EAL: Heap on socket 0 was expanded by 258MB 00:06:33.886 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.886 EAL: request: mp_malloc_sync 00:06:33.886 EAL: No shared files mode enabled, IPC is disabled 00:06:33.886 EAL: Heap on socket 0 was shrunk by 258MB 00:06:33.886 EAL: Trying to obtain current memory policy. 
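A pattern worth noting in the expand/shrink pairs above (and continuing through 514 MB and 1026 MB below): every reported growth is 2 MB plus a power of two, 4 = 2 + 2, 6 = 2 + 4, 10 = 2 + 8, ..., 1026 = 2 + 1024 (all in MB). That is consistent with the malloc test requesting power-of-two-sized buffers while EAL grows the heap by the request plus a constant 2 MB, though that reading is inferred from the reported numbers rather than from the test source.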
00:06:33.886 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.886 EAL: Restoring previous memory policy: 4 00:06:33.886 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.886 EAL: request: mp_malloc_sync 00:06:33.886 EAL: No shared files mode enabled, IPC is disabled 00:06:33.886 EAL: Heap on socket 0 was expanded by 514MB 00:06:34.144 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.144 EAL: request: mp_malloc_sync 00:06:34.144 EAL: No shared files mode enabled, IPC is disabled 00:06:34.144 EAL: Heap on socket 0 was shrunk by 514MB 00:06:34.144 EAL: Trying to obtain current memory policy. 00:06:34.144 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:34.403 EAL: Restoring previous memory policy: 4 00:06:34.403 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.403 EAL: request: mp_malloc_sync 00:06:34.403 EAL: No shared files mode enabled, IPC is disabled 00:06:34.403 EAL: Heap on socket 0 was expanded by 1026MB 00:06:34.403 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.662 EAL: request: mp_malloc_sync 00:06:34.662 EAL: No shared files mode enabled, IPC is disabled 00:06:34.662 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:34.662 passed 00:06:34.662 00:06:34.662 Run Summary: Type Total Ran Passed Failed Inactive 00:06:34.662 suites 1 1 n/a 0 0 00:06:34.662 tests 2 2 2 0 0 00:06:34.662 asserts 5246 5246 5246 0 n/a 00:06:34.662 00:06:34.662 Elapsed time = 1.451 seconds 00:06:34.662 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.662 EAL: request: mp_malloc_sync 00:06:34.662 EAL: No shared files mode enabled, IPC is disabled 00:06:34.662 EAL: Heap on socket 0 was shrunk by 2MB 00:06:34.662 EAL: No shared files mode enabled, IPC is disabled 00:06:34.662 EAL: No shared files mode enabled, IPC is disabled 00:06:34.662 EAL: No shared files mode enabled, IPC is disabled 00:06:34.662 00:06:34.662 real 0m1.721s 00:06:34.662 user 0m0.805s 00:06:34.662 sys 0m0.788s 00:06:34.662 09:32:12 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.662 09:32:12 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:34.662 ************************************ 00:06:34.662 END TEST env_vtophys 00:06:34.662 ************************************ 00:06:34.662 09:32:12 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:34.662 09:32:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.662 09:32:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.662 09:32:12 env -- common/autotest_common.sh@10 -- # set +x 00:06:34.662 ************************************ 00:06:34.662 START TEST env_pci 00:06:34.662 ************************************ 00:06:34.662 09:32:12 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:34.662 00:06:34.662 00:06:34.662 CUnit - A unit testing framework for C - Version 2.1-3 00:06:34.662 http://cunit.sourceforge.net/ 00:06:34.662 00:06:34.662 00:06:34.662 Suite: pci 00:06:34.662 Test: pci_hook ...[2024-07-24 09:32:12.473726] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 74287 has claimed it 00:06:34.921 passed 00:06:34.921 00:06:34.921 EAL: Cannot find device (10000:00:01.0) 00:06:34.921 EAL: Failed to attach device on primary process 00:06:34.921 Run Summary: Type Total Ran Passed Failed Inactive 00:06:34.921 suites 1 1 n/a 0 0 00:06:34.921 tests 1 1 1 0 0 
00:06:34.921 asserts 25 25 25 0 n/a 00:06:34.921 00:06:34.921 Elapsed time = 0.007 seconds 00:06:34.921 00:06:34.921 real 0m0.097s 00:06:34.921 user 0m0.035s 00:06:34.921 sys 0m0.062s 00:06:34.921 09:32:12 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.921 ************************************ 00:06:34.921 END TEST env_pci 00:06:34.921 ************************************ 00:06:34.921 09:32:12 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:34.921 09:32:12 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:34.921 09:32:12 env -- env/env.sh@15 -- # uname 00:06:34.921 09:32:12 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:34.921 09:32:12 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:34.921 09:32:12 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:34.921 09:32:12 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:34.921 09:32:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.921 09:32:12 env -- common/autotest_common.sh@10 -- # set +x 00:06:34.921 ************************************ 00:06:34.921 START TEST env_dpdk_post_init 00:06:34.921 ************************************ 00:06:34.921 09:32:12 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:34.921 EAL: Detected CPU lcores: 10 00:06:34.921 EAL: Detected NUMA nodes: 1 00:06:34.921 EAL: Detected shared linkage of DPDK 00:06:34.921 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:34.921 EAL: Selected IOVA mode 'PA' 00:06:35.179 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:35.179 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:35.179 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:35.179 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:06:35.179 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:06:35.179 Starting DPDK initialization... 00:06:35.179 Starting SPDK post initialization... 00:06:35.179 SPDK NVMe probe 00:06:35.179 Attaching to 0000:00:10.0 00:06:35.179 Attaching to 0000:00:11.0 00:06:35.179 Attaching to 0000:00:12.0 00:06:35.179 Attaching to 0000:00:13.0 00:06:35.179 Attached to 0000:00:10.0 00:06:35.179 Attached to 0000:00:11.0 00:06:35.179 Attached to 0000:00:13.0 00:06:35.179 Attached to 0000:00:12.0 00:06:35.179 Cleaning up... 
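The post-init test above was launched with -c 0x1 --base-virtaddr=0x200000000000. The -c argument is a hexadecimal core mask, so 0x1 (binary 0001) pins the run to lcore 0 of the 10 detected lcores; for comparison, 0x3 would add lcore 1 and 0xF would enable lcores 0-3. The fixed --base-virtaddr asks EAL to place its mappings at a known address instead of wherever ASLR would put them, keeping the memory layout predictable across the test binaries in this suite.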
00:06:35.179 00:06:35.179 real 0m0.274s 00:06:35.179 user 0m0.080s 00:06:35.179 sys 0m0.096s 00:06:35.179 ************************************ 00:06:35.179 END TEST env_dpdk_post_init 00:06:35.179 ************************************ 00:06:35.179 09:32:12 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.179 09:32:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:35.179 09:32:12 env -- env/env.sh@26 -- # uname 00:06:35.179 09:32:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:35.179 09:32:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:35.179 09:32:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.179 09:32:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.179 09:32:12 env -- common/autotest_common.sh@10 -- # set +x 00:06:35.179 ************************************ 00:06:35.179 START TEST env_mem_callbacks 00:06:35.179 ************************************ 00:06:35.179 09:32:12 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:35.179 EAL: Detected CPU lcores: 10 00:06:35.179 EAL: Detected NUMA nodes: 1 00:06:35.179 EAL: Detected shared linkage of DPDK 00:06:35.438 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:35.438 EAL: Selected IOVA mode 'PA' 00:06:35.438 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:35.438 00:06:35.438 00:06:35.438 CUnit - A unit testing framework for C - Version 2.1-3 00:06:35.438 http://cunit.sourceforge.net/ 00:06:35.438 00:06:35.438 00:06:35.438 Suite: memory 00:06:35.438 Test: test ... 00:06:35.438 register 0x200000200000 2097152 00:06:35.438 malloc 3145728 00:06:35.438 register 0x200000400000 4194304 00:06:35.438 buf 0x200000500000 len 3145728 PASSED 00:06:35.438 malloc 64 00:06:35.438 buf 0x2000004fff40 len 64 PASSED 00:06:35.438 malloc 4194304 00:06:35.438 register 0x200000800000 6291456 00:06:35.438 buf 0x200000a00000 len 4194304 PASSED 00:06:35.438 free 0x200000500000 3145728 00:06:35.438 free 0x2000004fff40 64 00:06:35.438 unregister 0x200000400000 4194304 PASSED 00:06:35.438 free 0x200000a00000 4194304 00:06:35.438 unregister 0x200000800000 6291456 PASSED 00:06:35.438 malloc 8388608 00:06:35.438 register 0x200000400000 10485760 00:06:35.438 buf 0x200000600000 len 8388608 PASSED 00:06:35.438 free 0x200000600000 8388608 00:06:35.438 unregister 0x200000400000 10485760 PASSED 00:06:35.438 passed 00:06:35.438 00:06:35.438 Run Summary: Type Total Ran Passed Failed Inactive 00:06:35.438 suites 1 1 n/a 0 0 00:06:35.438 tests 1 1 1 0 0 00:06:35.438 asserts 15 15 15 0 n/a 00:06:35.438 00:06:35.438 Elapsed time = 0.012 seconds 00:06:35.438 00:06:35.438 real 0m0.211s 00:06:35.438 user 0m0.040s 00:06:35.438 sys 0m0.070s 00:06:35.438 09:32:13 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.438 ************************************ 00:06:35.438 END TEST env_mem_callbacks 00:06:35.438 ************************************ 00:06:35.438 09:32:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:35.438 ************************************ 00:06:35.438 END TEST env 00:06:35.438 ************************************ 00:06:35.438 00:06:35.438 real 0m3.031s 00:06:35.438 user 0m1.364s 00:06:35.438 sys 0m1.325s 00:06:35.438 09:32:13 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.438 09:32:13 env -- 
common/autotest_common.sh@10 -- # set +x 00:06:35.697 09:32:13 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:35.697 09:32:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.697 09:32:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.697 09:32:13 -- common/autotest_common.sh@10 -- # set +x 00:06:35.697 ************************************ 00:06:35.697 START TEST rpc 00:06:35.698 ************************************ 00:06:35.698 09:32:13 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:35.698 * Looking for test storage... 00:06:35.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:35.698 09:32:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=74406 00:06:35.698 09:32:13 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:35.698 09:32:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:35.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.698 09:32:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 74406 00:06:35.698 09:32:13 rpc -- common/autotest_common.sh@831 -- # '[' -z 74406 ']' 00:06:35.698 09:32:13 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.698 09:32:13 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.698 09:32:13 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.698 09:32:13 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.698 09:32:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.956 [2024-07-24 09:32:13.521305] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:06:35.956 [2024-07-24 09:32:13.521439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74406 ] 00:06:35.956 [2024-07-24 09:32:13.690348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.956 [2024-07-24 09:32:13.733580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:35.956 [2024-07-24 09:32:13.733639] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 74406' to capture a snapshot of events at runtime. 00:06:35.957 [2024-07-24 09:32:13.733656] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:35.957 [2024-07-24 09:32:13.733671] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:35.957 [2024-07-24 09:32:13.733681] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid74406 for offline analysis/debug. 
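Behind the xtrace above, rpc.sh's prologue is a small start-and-wait pattern; a minimal sketch using the helper names visible in the trace (waitforlisten and killprocess come from autotest_common.sh, and the option handling is simplified here):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &    # start the target with the bdev tracepoint group enabled
    spdk_pid=$!
    trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT     # make sure the target dies if the test is interrupted
    waitforlisten $spdk_pid                                      # block until /var/tmp/spdk.sock accepts RPC connections

Once waitforlisten returns, every rpc_cmd call in the integrity tests below talks to this process over the default UNIX socket.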
00:06:35.957 [2024-07-24 09:32:13.733724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.522 09:32:14 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.522 09:32:14 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:36.522 09:32:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:36.522 09:32:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:36.522 09:32:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:36.522 09:32:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:36.522 09:32:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.522 09:32:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.522 09:32:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.522 ************************************ 00:06:36.522 START TEST rpc_integrity 00:06:36.522 ************************************ 00:06:36.522 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:36.522 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:36.522 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.522 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.522 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.522 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:36.522 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:36.781 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:36.781 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:36.781 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.781 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.781 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.781 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:36.781 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:36.781 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.781 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.781 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.781 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:36.781 { 00:06:36.781 "name": "Malloc0", 00:06:36.781 "aliases": [ 00:06:36.781 "01c2440a-9b6f-476b-875a-ca889b3fa16c" 00:06:36.781 ], 00:06:36.781 "product_name": "Malloc disk", 00:06:36.781 "block_size": 512, 00:06:36.781 "num_blocks": 16384, 00:06:36.781 "uuid": "01c2440a-9b6f-476b-875a-ca889b3fa16c", 00:06:36.781 "assigned_rate_limits": { 00:06:36.781 "rw_ios_per_sec": 0, 00:06:36.781 "rw_mbytes_per_sec": 0, 00:06:36.781 "r_mbytes_per_sec": 0, 00:06:36.781 "w_mbytes_per_sec": 0 00:06:36.781 }, 00:06:36.781 "claimed": false, 00:06:36.781 "zoned": false, 00:06:36.781 "supported_io_types": { 00:06:36.781 "read": true, 00:06:36.781 "write": true, 00:06:36.781 "unmap": true, 00:06:36.781 "flush": true, 
00:06:36.781 "reset": true, 00:06:36.781 "nvme_admin": false, 00:06:36.781 "nvme_io": false, 00:06:36.781 "nvme_io_md": false, 00:06:36.781 "write_zeroes": true, 00:06:36.781 "zcopy": true, 00:06:36.781 "get_zone_info": false, 00:06:36.781 "zone_management": false, 00:06:36.781 "zone_append": false, 00:06:36.781 "compare": false, 00:06:36.781 "compare_and_write": false, 00:06:36.781 "abort": true, 00:06:36.781 "seek_hole": false, 00:06:36.781 "seek_data": false, 00:06:36.781 "copy": true, 00:06:36.781 "nvme_iov_md": false 00:06:36.781 }, 00:06:36.781 "memory_domains": [ 00:06:36.781 { 00:06:36.781 "dma_device_id": "system", 00:06:36.781 "dma_device_type": 1 00:06:36.781 }, 00:06:36.781 { 00:06:36.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.781 "dma_device_type": 2 00:06:36.781 } 00:06:36.781 ], 00:06:36.781 "driver_specific": {} 00:06:36.781 } 00:06:36.781 ]' 00:06:36.781 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:36.781 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:36.781 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:36.781 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.781 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.781 [2024-07-24 09:32:14.454399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:36.781 [2024-07-24 09:32:14.454460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:36.781 [2024-07-24 09:32:14.454489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:36.781 [2024-07-24 09:32:14.454510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:36.781 [2024-07-24 09:32:14.456969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:36.781 [2024-07-24 09:32:14.457033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:36.781 Passthru0 00:06:36.781 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.781 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:36.781 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.781 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.781 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.781 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:36.781 { 00:06:36.781 "name": "Malloc0", 00:06:36.781 "aliases": [ 00:06:36.781 "01c2440a-9b6f-476b-875a-ca889b3fa16c" 00:06:36.781 ], 00:06:36.781 "product_name": "Malloc disk", 00:06:36.781 "block_size": 512, 00:06:36.781 "num_blocks": 16384, 00:06:36.781 "uuid": "01c2440a-9b6f-476b-875a-ca889b3fa16c", 00:06:36.781 "assigned_rate_limits": { 00:06:36.782 "rw_ios_per_sec": 0, 00:06:36.782 "rw_mbytes_per_sec": 0, 00:06:36.782 "r_mbytes_per_sec": 0, 00:06:36.782 "w_mbytes_per_sec": 0 00:06:36.782 }, 00:06:36.782 "claimed": true, 00:06:36.782 "claim_type": "exclusive_write", 00:06:36.782 "zoned": false, 00:06:36.782 "supported_io_types": { 00:06:36.782 "read": true, 00:06:36.782 "write": true, 00:06:36.782 "unmap": true, 00:06:36.782 "flush": true, 00:06:36.782 "reset": true, 00:06:36.782 "nvme_admin": false, 00:06:36.782 "nvme_io": false, 00:06:36.782 "nvme_io_md": false, 00:06:36.782 "write_zeroes": true, 00:06:36.782 "zcopy": true, 
00:06:36.782 "get_zone_info": false, 00:06:36.782 "zone_management": false, 00:06:36.782 "zone_append": false, 00:06:36.782 "compare": false, 00:06:36.782 "compare_and_write": false, 00:06:36.782 "abort": true, 00:06:36.782 "seek_hole": false, 00:06:36.782 "seek_data": false, 00:06:36.782 "copy": true, 00:06:36.782 "nvme_iov_md": false 00:06:36.782 }, 00:06:36.782 "memory_domains": [ 00:06:36.782 { 00:06:36.782 "dma_device_id": "system", 00:06:36.782 "dma_device_type": 1 00:06:36.782 }, 00:06:36.782 { 00:06:36.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.782 "dma_device_type": 2 00:06:36.782 } 00:06:36.782 ], 00:06:36.782 "driver_specific": {} 00:06:36.782 }, 00:06:36.782 { 00:06:36.782 "name": "Passthru0", 00:06:36.782 "aliases": [ 00:06:36.782 "1d7e1720-637b-5700-89e0-39c59c8f3ecf" 00:06:36.782 ], 00:06:36.782 "product_name": "passthru", 00:06:36.782 "block_size": 512, 00:06:36.782 "num_blocks": 16384, 00:06:36.782 "uuid": "1d7e1720-637b-5700-89e0-39c59c8f3ecf", 00:06:36.782 "assigned_rate_limits": { 00:06:36.782 "rw_ios_per_sec": 0, 00:06:36.782 "rw_mbytes_per_sec": 0, 00:06:36.782 "r_mbytes_per_sec": 0, 00:06:36.782 "w_mbytes_per_sec": 0 00:06:36.782 }, 00:06:36.782 "claimed": false, 00:06:36.782 "zoned": false, 00:06:36.782 "supported_io_types": { 00:06:36.782 "read": true, 00:06:36.782 "write": true, 00:06:36.782 "unmap": true, 00:06:36.782 "flush": true, 00:06:36.782 "reset": true, 00:06:36.782 "nvme_admin": false, 00:06:36.782 "nvme_io": false, 00:06:36.782 "nvme_io_md": false, 00:06:36.782 "write_zeroes": true, 00:06:36.782 "zcopy": true, 00:06:36.782 "get_zone_info": false, 00:06:36.782 "zone_management": false, 00:06:36.782 "zone_append": false, 00:06:36.782 "compare": false, 00:06:36.782 "compare_and_write": false, 00:06:36.782 "abort": true, 00:06:36.782 "seek_hole": false, 00:06:36.782 "seek_data": false, 00:06:36.782 "copy": true, 00:06:36.782 "nvme_iov_md": false 00:06:36.782 }, 00:06:36.782 "memory_domains": [ 00:06:36.782 { 00:06:36.782 "dma_device_id": "system", 00:06:36.782 "dma_device_type": 1 00:06:36.782 }, 00:06:36.782 { 00:06:36.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.782 "dma_device_type": 2 00:06:36.782 } 00:06:36.782 ], 00:06:36.782 "driver_specific": { 00:06:36.782 "passthru": { 00:06:36.782 "name": "Passthru0", 00:06:36.782 "base_bdev_name": "Malloc0" 00:06:36.782 } 00:06:36.782 } 00:06:36.782 } 00:06:36.782 ]' 00:06:36.782 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:36.782 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:36.782 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:36.782 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.782 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.782 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.782 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:36.782 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.782 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.782 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.782 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:36.782 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.782 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:06:36.782 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.782 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:36.782 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:37.041 ************************************ 00:06:37.041 END TEST rpc_integrity 00:06:37.041 ************************************ 00:06:37.041 09:32:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:37.041 00:06:37.041 real 0m0.301s 00:06:37.041 user 0m0.183s 00:06:37.041 sys 0m0.053s 00:06:37.041 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.041 09:32:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.041 09:32:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:37.041 09:32:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.041 09:32:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.041 09:32:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.041 ************************************ 00:06:37.041 START TEST rpc_plugins 00:06:37.041 ************************************ 00:06:37.041 09:32:14 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:37.041 09:32:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:37.041 09:32:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.041 09:32:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:37.041 09:32:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.041 09:32:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:37.041 09:32:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:37.041 09:32:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.041 09:32:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:37.041 09:32:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.041 09:32:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:37.041 { 00:06:37.041 "name": "Malloc1", 00:06:37.041 "aliases": [ 00:06:37.041 "ce4509c2-803c-4e95-ab5d-db7167aca6c1" 00:06:37.041 ], 00:06:37.041 "product_name": "Malloc disk", 00:06:37.041 "block_size": 4096, 00:06:37.041 "num_blocks": 256, 00:06:37.041 "uuid": "ce4509c2-803c-4e95-ab5d-db7167aca6c1", 00:06:37.041 "assigned_rate_limits": { 00:06:37.041 "rw_ios_per_sec": 0, 00:06:37.041 "rw_mbytes_per_sec": 0, 00:06:37.041 "r_mbytes_per_sec": 0, 00:06:37.041 "w_mbytes_per_sec": 0 00:06:37.041 }, 00:06:37.041 "claimed": false, 00:06:37.041 "zoned": false, 00:06:37.041 "supported_io_types": { 00:06:37.041 "read": true, 00:06:37.041 "write": true, 00:06:37.041 "unmap": true, 00:06:37.041 "flush": true, 00:06:37.041 "reset": true, 00:06:37.041 "nvme_admin": false, 00:06:37.041 "nvme_io": false, 00:06:37.041 "nvme_io_md": false, 00:06:37.041 "write_zeroes": true, 00:06:37.041 "zcopy": true, 00:06:37.041 "get_zone_info": false, 00:06:37.041 "zone_management": false, 00:06:37.041 "zone_append": false, 00:06:37.041 "compare": false, 00:06:37.041 "compare_and_write": false, 00:06:37.041 "abort": true, 00:06:37.041 "seek_hole": false, 00:06:37.041 "seek_data": false, 00:06:37.041 "copy": true, 00:06:37.041 "nvme_iov_md": false 00:06:37.041 }, 00:06:37.041 "memory_domains": [ 00:06:37.041 { 00:06:37.041 "dma_device_id": "system", 00:06:37.041 "dma_device_type": 1 00:06:37.041 }, 00:06:37.041 { 00:06:37.041 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:06:37.041 "dma_device_type": 2 00:06:37.041 } 00:06:37.041 ], 00:06:37.041 "driver_specific": {} 00:06:37.041 } 00:06:37.041 ]' 00:06:37.041 09:32:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:37.041 09:32:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:37.041 09:32:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:37.041 09:32:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.041 09:32:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:37.041 09:32:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.041 09:32:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:37.041 09:32:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.041 09:32:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:37.041 09:32:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.041 09:32:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:37.041 09:32:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:37.041 ************************************ 00:06:37.041 END TEST rpc_plugins 00:06:37.041 ************************************ 00:06:37.041 09:32:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:37.041 00:06:37.041 real 0m0.151s 00:06:37.041 user 0m0.097s 00:06:37.041 sys 0m0.021s 00:06:37.041 09:32:14 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.041 09:32:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:37.300 09:32:14 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:37.300 09:32:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.300 09:32:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.300 09:32:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.300 ************************************ 00:06:37.300 START TEST rpc_trace_cmd_test 00:06:37.300 ************************************ 00:06:37.300 09:32:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:37.300 09:32:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:37.300 09:32:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:37.300 09:32:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.300 09:32:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.300 09:32:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.300 09:32:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:37.300 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid74406", 00:06:37.300 "tpoint_group_mask": "0x8", 00:06:37.300 "iscsi_conn": { 00:06:37.300 "mask": "0x2", 00:06:37.300 "tpoint_mask": "0x0" 00:06:37.300 }, 00:06:37.300 "scsi": { 00:06:37.300 "mask": "0x4", 00:06:37.300 "tpoint_mask": "0x0" 00:06:37.300 }, 00:06:37.300 "bdev": { 00:06:37.300 "mask": "0x8", 00:06:37.300 "tpoint_mask": "0xffffffffffffffff" 00:06:37.300 }, 00:06:37.300 "nvmf_rdma": { 00:06:37.300 "mask": "0x10", 00:06:37.300 "tpoint_mask": "0x0" 00:06:37.300 }, 00:06:37.300 "nvmf_tcp": { 00:06:37.300 "mask": "0x20", 00:06:37.301 "tpoint_mask": "0x0" 00:06:37.301 }, 00:06:37.301 "ftl": { 00:06:37.301 "mask": "0x40", 00:06:37.301 "tpoint_mask": "0x0" 00:06:37.301 }, 00:06:37.301 "blobfs": { 00:06:37.301 "mask": "0x80", 00:06:37.301 
"tpoint_mask": "0x0" 00:06:37.301 }, 00:06:37.301 "dsa": { 00:06:37.301 "mask": "0x200", 00:06:37.301 "tpoint_mask": "0x0" 00:06:37.301 }, 00:06:37.301 "thread": { 00:06:37.301 "mask": "0x400", 00:06:37.301 "tpoint_mask": "0x0" 00:06:37.301 }, 00:06:37.301 "nvme_pcie": { 00:06:37.301 "mask": "0x800", 00:06:37.301 "tpoint_mask": "0x0" 00:06:37.301 }, 00:06:37.301 "iaa": { 00:06:37.301 "mask": "0x1000", 00:06:37.301 "tpoint_mask": "0x0" 00:06:37.301 }, 00:06:37.301 "nvme_tcp": { 00:06:37.301 "mask": "0x2000", 00:06:37.301 "tpoint_mask": "0x0" 00:06:37.301 }, 00:06:37.301 "bdev_nvme": { 00:06:37.301 "mask": "0x4000", 00:06:37.301 "tpoint_mask": "0x0" 00:06:37.301 }, 00:06:37.301 "sock": { 00:06:37.301 "mask": "0x8000", 00:06:37.301 "tpoint_mask": "0x0" 00:06:37.301 } 00:06:37.301 }' 00:06:37.301 09:32:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:37.301 09:32:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:37.301 09:32:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:37.301 09:32:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:37.301 09:32:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:37.301 09:32:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:37.301 09:32:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:37.301 09:32:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:37.301 09:32:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:37.560 ************************************ 00:06:37.560 END TEST rpc_trace_cmd_test 00:06:37.560 ************************************ 00:06:37.560 09:32:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:37.560 00:06:37.560 real 0m0.238s 00:06:37.560 user 0m0.191s 00:06:37.560 sys 0m0.037s 00:06:37.560 09:32:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.560 09:32:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.560 09:32:15 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:37.560 09:32:15 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:37.560 09:32:15 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:37.560 09:32:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.560 09:32:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.560 09:32:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.560 ************************************ 00:06:37.560 START TEST rpc_daemon_integrity 00:06:37.560 ************************************ 00:06:37.560 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:37.560 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:37.560 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.560 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.560 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.560 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:37.560 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:37.560 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:37.560 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:37.560 09:32:15 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.560 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.560 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.560 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:37.560 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:37.560 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.560 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.560 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.560 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:37.560 { 00:06:37.560 "name": "Malloc2", 00:06:37.561 "aliases": [ 00:06:37.561 "38fa2ddb-e591-429b-9671-8a0b209a32d4" 00:06:37.561 ], 00:06:37.561 "product_name": "Malloc disk", 00:06:37.561 "block_size": 512, 00:06:37.561 "num_blocks": 16384, 00:06:37.561 "uuid": "38fa2ddb-e591-429b-9671-8a0b209a32d4", 00:06:37.561 "assigned_rate_limits": { 00:06:37.561 "rw_ios_per_sec": 0, 00:06:37.561 "rw_mbytes_per_sec": 0, 00:06:37.561 "r_mbytes_per_sec": 0, 00:06:37.561 "w_mbytes_per_sec": 0 00:06:37.561 }, 00:06:37.561 "claimed": false, 00:06:37.561 "zoned": false, 00:06:37.561 "supported_io_types": { 00:06:37.561 "read": true, 00:06:37.561 "write": true, 00:06:37.561 "unmap": true, 00:06:37.561 "flush": true, 00:06:37.561 "reset": true, 00:06:37.561 "nvme_admin": false, 00:06:37.561 "nvme_io": false, 00:06:37.561 "nvme_io_md": false, 00:06:37.561 "write_zeroes": true, 00:06:37.561 "zcopy": true, 00:06:37.561 "get_zone_info": false, 00:06:37.561 "zone_management": false, 00:06:37.561 "zone_append": false, 00:06:37.561 "compare": false, 00:06:37.561 "compare_and_write": false, 00:06:37.561 "abort": true, 00:06:37.561 "seek_hole": false, 00:06:37.561 "seek_data": false, 00:06:37.561 "copy": true, 00:06:37.561 "nvme_iov_md": false 00:06:37.561 }, 00:06:37.561 "memory_domains": [ 00:06:37.561 { 00:06:37.561 "dma_device_id": "system", 00:06:37.561 "dma_device_type": 1 00:06:37.561 }, 00:06:37.561 { 00:06:37.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.561 "dma_device_type": 2 00:06:37.561 } 00:06:37.561 ], 00:06:37.561 "driver_specific": {} 00:06:37.561 } 00:06:37.561 ]' 00:06:37.561 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:37.561 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:37.561 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:37.561 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.561 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.561 [2024-07-24 09:32:15.322661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:37.561 [2024-07-24 09:32:15.322725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:37.561 [2024-07-24 09:32:15.322748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:37.561 [2024-07-24 09:32:15.322763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:37.561 [2024-07-24 09:32:15.325294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:37.561 [2024-07-24 09:32:15.325339] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:37.561 Passthru0 00:06:37.561 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.561 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:37.561 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.561 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.561 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.561 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:37.561 { 00:06:37.561 "name": "Malloc2", 00:06:37.561 "aliases": [ 00:06:37.561 "38fa2ddb-e591-429b-9671-8a0b209a32d4" 00:06:37.561 ], 00:06:37.561 "product_name": "Malloc disk", 00:06:37.561 "block_size": 512, 00:06:37.561 "num_blocks": 16384, 00:06:37.561 "uuid": "38fa2ddb-e591-429b-9671-8a0b209a32d4", 00:06:37.561 "assigned_rate_limits": { 00:06:37.561 "rw_ios_per_sec": 0, 00:06:37.561 "rw_mbytes_per_sec": 0, 00:06:37.561 "r_mbytes_per_sec": 0, 00:06:37.561 "w_mbytes_per_sec": 0 00:06:37.561 }, 00:06:37.561 "claimed": true, 00:06:37.561 "claim_type": "exclusive_write", 00:06:37.561 "zoned": false, 00:06:37.561 "supported_io_types": { 00:06:37.561 "read": true, 00:06:37.561 "write": true, 00:06:37.561 "unmap": true, 00:06:37.561 "flush": true, 00:06:37.561 "reset": true, 00:06:37.561 "nvme_admin": false, 00:06:37.561 "nvme_io": false, 00:06:37.561 "nvme_io_md": false, 00:06:37.561 "write_zeroes": true, 00:06:37.561 "zcopy": true, 00:06:37.561 "get_zone_info": false, 00:06:37.561 "zone_management": false, 00:06:37.561 "zone_append": false, 00:06:37.561 "compare": false, 00:06:37.561 "compare_and_write": false, 00:06:37.561 "abort": true, 00:06:37.561 "seek_hole": false, 00:06:37.561 "seek_data": false, 00:06:37.561 "copy": true, 00:06:37.561 "nvme_iov_md": false 00:06:37.561 }, 00:06:37.561 "memory_domains": [ 00:06:37.561 { 00:06:37.561 "dma_device_id": "system", 00:06:37.561 "dma_device_type": 1 00:06:37.561 }, 00:06:37.561 { 00:06:37.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.561 "dma_device_type": 2 00:06:37.561 } 00:06:37.561 ], 00:06:37.561 "driver_specific": {} 00:06:37.561 }, 00:06:37.561 { 00:06:37.561 "name": "Passthru0", 00:06:37.561 "aliases": [ 00:06:37.561 "fc1c4e43-e1b7-5667-a5cf-d66bb3f7b5b4" 00:06:37.561 ], 00:06:37.561 "product_name": "passthru", 00:06:37.561 "block_size": 512, 00:06:37.561 "num_blocks": 16384, 00:06:37.561 "uuid": "fc1c4e43-e1b7-5667-a5cf-d66bb3f7b5b4", 00:06:37.561 "assigned_rate_limits": { 00:06:37.561 "rw_ios_per_sec": 0, 00:06:37.561 "rw_mbytes_per_sec": 0, 00:06:37.561 "r_mbytes_per_sec": 0, 00:06:37.561 "w_mbytes_per_sec": 0 00:06:37.561 }, 00:06:37.561 "claimed": false, 00:06:37.561 "zoned": false, 00:06:37.561 "supported_io_types": { 00:06:37.561 "read": true, 00:06:37.561 "write": true, 00:06:37.561 "unmap": true, 00:06:37.561 "flush": true, 00:06:37.561 "reset": true, 00:06:37.561 "nvme_admin": false, 00:06:37.561 "nvme_io": false, 00:06:37.561 "nvme_io_md": false, 00:06:37.561 "write_zeroes": true, 00:06:37.561 "zcopy": true, 00:06:37.561 "get_zone_info": false, 00:06:37.561 "zone_management": false, 00:06:37.561 "zone_append": false, 00:06:37.561 "compare": false, 00:06:37.561 "compare_and_write": false, 00:06:37.561 "abort": true, 00:06:37.561 "seek_hole": false, 00:06:37.561 "seek_data": false, 00:06:37.561 "copy": true, 00:06:37.561 "nvme_iov_md": false 00:06:37.561 }, 00:06:37.561 
"memory_domains": [ 00:06:37.561 { 00:06:37.561 "dma_device_id": "system", 00:06:37.561 "dma_device_type": 1 00:06:37.561 }, 00:06:37.561 { 00:06:37.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.561 "dma_device_type": 2 00:06:37.561 } 00:06:37.561 ], 00:06:37.561 "driver_specific": { 00:06:37.561 "passthru": { 00:06:37.561 "name": "Passthru0", 00:06:37.561 "base_bdev_name": "Malloc2" 00:06:37.561 } 00:06:37.561 } 00:06:37.561 } 00:06:37.561 ]' 00:06:37.561 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:37.820 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:37.820 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:37.820 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.820 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.820 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.820 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:37.820 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.820 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.820 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.820 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:37.820 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.820 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.820 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.820 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:37.820 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:37.820 ************************************ 00:06:37.820 END TEST rpc_daemon_integrity 00:06:37.820 ************************************ 00:06:37.820 09:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:37.820 00:06:37.820 real 0m0.293s 00:06:37.820 user 0m0.171s 00:06:37.820 sys 0m0.058s 00:06:37.820 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.820 09:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.820 09:32:15 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:37.820 09:32:15 rpc -- rpc/rpc.sh@84 -- # killprocess 74406 00:06:37.820 09:32:15 rpc -- common/autotest_common.sh@950 -- # '[' -z 74406 ']' 00:06:37.820 09:32:15 rpc -- common/autotest_common.sh@954 -- # kill -0 74406 00:06:37.820 09:32:15 rpc -- common/autotest_common.sh@955 -- # uname 00:06:37.820 09:32:15 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.820 09:32:15 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74406 00:06:37.820 killing process with pid 74406 00:06:37.820 09:32:15 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.820 09:32:15 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.820 09:32:15 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74406' 00:06:37.820 09:32:15 rpc -- common/autotest_common.sh@969 -- # kill 74406 00:06:37.820 09:32:15 rpc -- common/autotest_common.sh@974 -- # wait 74406 00:06:38.385 00:06:38.385 real 0m2.686s 00:06:38.385 user 0m3.182s 
00:06:38.385 sys 0m0.855s 00:06:38.386 09:32:15 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.386 09:32:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.386 ************************************ 00:06:38.386 END TEST rpc 00:06:38.386 ************************************ 00:06:38.386 09:32:16 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:38.386 09:32:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.386 09:32:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.386 09:32:16 -- common/autotest_common.sh@10 -- # set +x 00:06:38.386 ************************************ 00:06:38.386 START TEST skip_rpc 00:06:38.386 ************************************ 00:06:38.386 09:32:16 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:38.386 * Looking for test storage... 00:06:38.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:38.386 09:32:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:38.386 09:32:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:38.386 09:32:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:38.386 09:32:16 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.386 09:32:16 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.386 09:32:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.386 ************************************ 00:06:38.386 START TEST skip_rpc 00:06:38.386 ************************************ 00:06:38.386 09:32:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:38.386 09:32:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=74599 00:06:38.386 09:32:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:38.386 09:32:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:38.386 09:32:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:38.644 [2024-07-24 09:32:16.271653] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:06:38.644 [2024-07-24 09:32:16.271941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74599 ] 00:06:38.644 [2024-07-24 09:32:16.433745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.902 [2024-07-24 09:32:16.478376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 74599 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 74599 ']' 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 74599 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74599 00:06:44.168 killing process with pid 74599 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74599' 00:06:44.168 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 74599 00:06:44.169 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 74599 00:06:44.169 ************************************ 00:06:44.169 END TEST skip_rpc 00:06:44.169 ************************************ 00:06:44.169 00:06:44.169 real 0m5.427s 00:06:44.169 user 0m4.987s 00:06:44.169 sys 0m0.344s 00:06:44.169 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.169 09:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:06:44.169 09:32:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:44.169 09:32:21 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.169 09:32:21 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.169 09:32:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.169 ************************************ 00:06:44.169 START TEST skip_rpc_with_json 00:06:44.169 ************************************ 00:06:44.169 09:32:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:44.169 09:32:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:44.169 09:32:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=74687 00:06:44.169 09:32:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.169 09:32:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:44.169 09:32:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 74687 00:06:44.169 09:32:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 74687 ']' 00:06:44.169 09:32:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.169 09:32:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.169 09:32:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.169 09:32:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.169 09:32:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:44.169 [2024-07-24 09:32:21.770447] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:06:44.169 [2024-07-24 09:32:21.770586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74687 ] 00:06:44.169 [2024-07-24 09:32:21.936821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.169 [2024-07-24 09:32:21.981692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.103 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.103 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:45.103 09:32:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:45.103 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.103 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:45.103 [2024-07-24 09:32:22.569358] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:45.103 request: 00:06:45.103 { 00:06:45.103 "trtype": "tcp", 00:06:45.103 "method": "nvmf_get_transports", 00:06:45.103 "req_id": 1 00:06:45.103 } 00:06:45.103 Got JSON-RPC error response 00:06:45.103 response: 00:06:45.103 { 00:06:45.103 "code": -19, 00:06:45.103 "message": "No such device" 00:06:45.103 } 00:06:45.103 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:45.103 09:32:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:45.103 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.103 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:45.103 [2024-07-24 09:32:22.582044] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.103 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.103 09:32:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:45.103 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.103 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:45.103 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.103 09:32:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:45.103 { 00:06:45.103 "subsystems": [ 00:06:45.103 { 00:06:45.103 "subsystem": "keyring", 00:06:45.103 "config": [] 00:06:45.103 }, 00:06:45.103 { 00:06:45.103 "subsystem": "iobuf", 00:06:45.103 "config": [ 00:06:45.103 { 00:06:45.103 "method": "iobuf_set_options", 00:06:45.103 "params": { 00:06:45.103 "small_pool_count": 8192, 00:06:45.103 "large_pool_count": 1024, 00:06:45.103 "small_bufsize": 8192, 00:06:45.103 "large_bufsize": 135168 00:06:45.103 } 00:06:45.103 } 00:06:45.103 ] 00:06:45.103 }, 00:06:45.103 { 00:06:45.103 "subsystem": "sock", 00:06:45.103 "config": [ 00:06:45.103 { 00:06:45.103 "method": "sock_set_default_impl", 00:06:45.103 "params": { 00:06:45.103 "impl_name": "posix" 00:06:45.103 } 00:06:45.103 }, 00:06:45.103 { 00:06:45.103 "method": "sock_impl_set_options", 00:06:45.103 "params": { 00:06:45.103 "impl_name": "ssl", 00:06:45.103 "recv_buf_size": 4096, 00:06:45.103 "send_buf_size": 4096, 
00:06:45.103 "enable_recv_pipe": true, 00:06:45.103 "enable_quickack": false, 00:06:45.103 "enable_placement_id": 0, 00:06:45.103 "enable_zerocopy_send_server": true, 00:06:45.103 "enable_zerocopy_send_client": false, 00:06:45.103 "zerocopy_threshold": 0, 00:06:45.103 "tls_version": 0, 00:06:45.103 "enable_ktls": false 00:06:45.103 } 00:06:45.103 }, 00:06:45.103 { 00:06:45.103 "method": "sock_impl_set_options", 00:06:45.103 "params": { 00:06:45.103 "impl_name": "posix", 00:06:45.103 "recv_buf_size": 2097152, 00:06:45.103 "send_buf_size": 2097152, 00:06:45.103 "enable_recv_pipe": true, 00:06:45.103 "enable_quickack": false, 00:06:45.103 "enable_placement_id": 0, 00:06:45.103 "enable_zerocopy_send_server": true, 00:06:45.103 "enable_zerocopy_send_client": false, 00:06:45.103 "zerocopy_threshold": 0, 00:06:45.103 "tls_version": 0, 00:06:45.103 "enable_ktls": false 00:06:45.103 } 00:06:45.103 } 00:06:45.103 ] 00:06:45.103 }, 00:06:45.103 { 00:06:45.103 "subsystem": "vmd", 00:06:45.103 "config": [] 00:06:45.103 }, 00:06:45.103 { 00:06:45.103 "subsystem": "accel", 00:06:45.103 "config": [ 00:06:45.103 { 00:06:45.103 "method": "accel_set_options", 00:06:45.103 "params": { 00:06:45.103 "small_cache_size": 128, 00:06:45.103 "large_cache_size": 16, 00:06:45.103 "task_count": 2048, 00:06:45.103 "sequence_count": 2048, 00:06:45.103 "buf_count": 2048 00:06:45.103 } 00:06:45.103 } 00:06:45.103 ] 00:06:45.103 }, 00:06:45.103 { 00:06:45.103 "subsystem": "bdev", 00:06:45.103 "config": [ 00:06:45.103 { 00:06:45.103 "method": "bdev_set_options", 00:06:45.103 "params": { 00:06:45.103 "bdev_io_pool_size": 65535, 00:06:45.103 "bdev_io_cache_size": 256, 00:06:45.103 "bdev_auto_examine": true, 00:06:45.103 "iobuf_small_cache_size": 128, 00:06:45.103 "iobuf_large_cache_size": 16 00:06:45.103 } 00:06:45.103 }, 00:06:45.103 { 00:06:45.103 "method": "bdev_raid_set_options", 00:06:45.103 "params": { 00:06:45.103 "process_window_size_kb": 1024, 00:06:45.103 "process_max_bandwidth_mb_sec": 0 00:06:45.103 } 00:06:45.103 }, 00:06:45.103 { 00:06:45.103 "method": "bdev_iscsi_set_options", 00:06:45.103 "params": { 00:06:45.103 "timeout_sec": 30 00:06:45.103 } 00:06:45.103 }, 00:06:45.103 { 00:06:45.103 "method": "bdev_nvme_set_options", 00:06:45.103 "params": { 00:06:45.103 "action_on_timeout": "none", 00:06:45.103 "timeout_us": 0, 00:06:45.103 "timeout_admin_us": 0, 00:06:45.103 "keep_alive_timeout_ms": 10000, 00:06:45.103 "arbitration_burst": 0, 00:06:45.103 "low_priority_weight": 0, 00:06:45.103 "medium_priority_weight": 0, 00:06:45.103 "high_priority_weight": 0, 00:06:45.103 "nvme_adminq_poll_period_us": 10000, 00:06:45.103 "nvme_ioq_poll_period_us": 0, 00:06:45.103 "io_queue_requests": 0, 00:06:45.103 "delay_cmd_submit": true, 00:06:45.103 "transport_retry_count": 4, 00:06:45.103 "bdev_retry_count": 3, 00:06:45.103 "transport_ack_timeout": 0, 00:06:45.103 "ctrlr_loss_timeout_sec": 0, 00:06:45.103 "reconnect_delay_sec": 0, 00:06:45.103 "fast_io_fail_timeout_sec": 0, 00:06:45.103 "disable_auto_failback": false, 00:06:45.103 "generate_uuids": false, 00:06:45.103 "transport_tos": 0, 00:06:45.103 "nvme_error_stat": false, 00:06:45.103 "rdma_srq_size": 0, 00:06:45.103 "io_path_stat": false, 00:06:45.103 "allow_accel_sequence": false, 00:06:45.103 "rdma_max_cq_size": 0, 00:06:45.103 "rdma_cm_event_timeout_ms": 0, 00:06:45.103 "dhchap_digests": [ 00:06:45.103 "sha256", 00:06:45.103 "sha384", 00:06:45.103 "sha512" 00:06:45.103 ], 00:06:45.103 "dhchap_dhgroups": [ 00:06:45.103 "null", 00:06:45.103 "ffdhe2048", 00:06:45.103 
"ffdhe3072", 00:06:45.103 "ffdhe4096", 00:06:45.103 "ffdhe6144", 00:06:45.103 "ffdhe8192" 00:06:45.103 ] 00:06:45.103 } 00:06:45.103 }, 00:06:45.103 { 00:06:45.103 "method": "bdev_nvme_set_hotplug", 00:06:45.103 "params": { 00:06:45.103 "period_us": 100000, 00:06:45.103 "enable": false 00:06:45.103 } 00:06:45.103 }, 00:06:45.103 { 00:06:45.104 "method": "bdev_wait_for_examine" 00:06:45.104 } 00:06:45.104 ] 00:06:45.104 }, 00:06:45.104 { 00:06:45.104 "subsystem": "scsi", 00:06:45.104 "config": null 00:06:45.104 }, 00:06:45.104 { 00:06:45.104 "subsystem": "scheduler", 00:06:45.104 "config": [ 00:06:45.104 { 00:06:45.104 "method": "framework_set_scheduler", 00:06:45.104 "params": { 00:06:45.104 "name": "static" 00:06:45.104 } 00:06:45.104 } 00:06:45.104 ] 00:06:45.104 }, 00:06:45.104 { 00:06:45.104 "subsystem": "vhost_scsi", 00:06:45.104 "config": [] 00:06:45.104 }, 00:06:45.104 { 00:06:45.104 "subsystem": "vhost_blk", 00:06:45.104 "config": [] 00:06:45.104 }, 00:06:45.104 { 00:06:45.104 "subsystem": "ublk", 00:06:45.104 "config": [] 00:06:45.104 }, 00:06:45.104 { 00:06:45.104 "subsystem": "nbd", 00:06:45.104 "config": [] 00:06:45.104 }, 00:06:45.104 { 00:06:45.104 "subsystem": "nvmf", 00:06:45.104 "config": [ 00:06:45.104 { 00:06:45.104 "method": "nvmf_set_config", 00:06:45.104 "params": { 00:06:45.104 "discovery_filter": "match_any", 00:06:45.104 "admin_cmd_passthru": { 00:06:45.104 "identify_ctrlr": false 00:06:45.104 } 00:06:45.104 } 00:06:45.104 }, 00:06:45.104 { 00:06:45.104 "method": "nvmf_set_max_subsystems", 00:06:45.104 "params": { 00:06:45.104 "max_subsystems": 1024 00:06:45.104 } 00:06:45.104 }, 00:06:45.104 { 00:06:45.104 "method": "nvmf_set_crdt", 00:06:45.104 "params": { 00:06:45.104 "crdt1": 0, 00:06:45.104 "crdt2": 0, 00:06:45.104 "crdt3": 0 00:06:45.104 } 00:06:45.104 }, 00:06:45.104 { 00:06:45.104 "method": "nvmf_create_transport", 00:06:45.104 "params": { 00:06:45.104 "trtype": "TCP", 00:06:45.104 "max_queue_depth": 128, 00:06:45.104 "max_io_qpairs_per_ctrlr": 127, 00:06:45.104 "in_capsule_data_size": 4096, 00:06:45.104 "max_io_size": 131072, 00:06:45.104 "io_unit_size": 131072, 00:06:45.104 "max_aq_depth": 128, 00:06:45.104 "num_shared_buffers": 511, 00:06:45.104 "buf_cache_size": 4294967295, 00:06:45.104 "dif_insert_or_strip": false, 00:06:45.104 "zcopy": false, 00:06:45.104 "c2h_success": true, 00:06:45.104 "sock_priority": 0, 00:06:45.104 "abort_timeout_sec": 1, 00:06:45.104 "ack_timeout": 0, 00:06:45.104 "data_wr_pool_size": 0 00:06:45.104 } 00:06:45.104 } 00:06:45.104 ] 00:06:45.104 }, 00:06:45.104 { 00:06:45.104 "subsystem": "iscsi", 00:06:45.104 "config": [ 00:06:45.104 { 00:06:45.104 "method": "iscsi_set_options", 00:06:45.104 "params": { 00:06:45.104 "node_base": "iqn.2016-06.io.spdk", 00:06:45.104 "max_sessions": 128, 00:06:45.104 "max_connections_per_session": 2, 00:06:45.104 "max_queue_depth": 64, 00:06:45.104 "default_time2wait": 2, 00:06:45.104 "default_time2retain": 20, 00:06:45.104 "first_burst_length": 8192, 00:06:45.104 "immediate_data": true, 00:06:45.104 "allow_duplicated_isid": false, 00:06:45.104 "error_recovery_level": 0, 00:06:45.104 "nop_timeout": 60, 00:06:45.104 "nop_in_interval": 30, 00:06:45.104 "disable_chap": false, 00:06:45.104 "require_chap": false, 00:06:45.104 "mutual_chap": false, 00:06:45.104 "chap_group": 0, 00:06:45.104 "max_large_datain_per_connection": 64, 00:06:45.104 "max_r2t_per_connection": 4, 00:06:45.104 "pdu_pool_size": 36864, 00:06:45.104 "immediate_data_pool_size": 16384, 00:06:45.104 "data_out_pool_size": 2048 
00:06:45.104 } 00:06:45.104 } 00:06:45.104 ] 00:06:45.104 } 00:06:45.104 ] 00:06:45.104 } 00:06:45.104 09:32:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:45.104 09:32:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 74687 00:06:45.104 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 74687 ']' 00:06:45.104 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 74687 00:06:45.104 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:45.104 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.104 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74687 00:06:45.104 killing process with pid 74687 00:06:45.104 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.104 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.104 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74687' 00:06:45.104 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 74687 00:06:45.104 09:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 74687 00:06:45.673 09:32:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=74710 00:06:45.673 09:32:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:45.673 09:32:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 74710 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 74710 ']' 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 74710 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74710 00:06:50.935 killing process with pid 74710 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74710' 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 74710 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 74710 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:50.935 00:06:50.935 real 0m6.948s 00:06:50.935 user 0m6.418s 00:06:50.935 sys 0m0.799s 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@10 -- # set +x 00:06:50.935 ************************************ 00:06:50.935 END TEST skip_rpc_with_json 00:06:50.935 ************************************ 00:06:50.935 09:32:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:50.935 09:32:28 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.935 09:32:28 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.935 09:32:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.935 ************************************ 00:06:50.935 START TEST skip_rpc_with_delay 00:06:50.935 ************************************ 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:50.935 09:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:51.193 [2024-07-24 09:32:28.788177] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
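The error above is the outcome skip_rpc_with_delay expects: --wait-for-rpc pauses the app before subsystem initialization until an RPC asks for it, which is meaningless when --no-rpc-server is also given, so spdk_tgt refuses to start and the cleanup entries below follow. A sketch of the valid flag combination is shown here; the framework_start_init call is an assumption about the usual follow-up, not something this test issues.

    # Valid use of --wait-for-rpc (sketch): start the target paused, then let an RPC
    # finish initialization. This test only checks that the invalid combination fails.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init    # assumed follow-up RPC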
00:06:51.193 [2024-07-24 09:32:28.788336] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:51.193 09:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:51.194 09:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:51.194 09:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:51.194 09:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:51.194 00:06:51.194 real 0m0.170s 00:06:51.194 user 0m0.086s 00:06:51.194 sys 0m0.083s 00:06:51.194 09:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.194 ************************************ 00:06:51.194 END TEST skip_rpc_with_delay 00:06:51.194 09:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:51.194 ************************************ 00:06:51.194 09:32:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:51.194 09:32:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:51.194 09:32:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:51.194 09:32:28 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.194 09:32:28 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.194 09:32:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.194 ************************************ 00:06:51.194 START TEST exit_on_failed_rpc_init 00:06:51.194 ************************************ 00:06:51.194 09:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:51.194 09:32:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=74829 00:06:51.194 09:32:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.194 09:32:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 74829 00:06:51.194 09:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 74829 ']' 00:06:51.194 09:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.194 09:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.194 09:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.194 09:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.194 09:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:51.451 [2024-07-24 09:32:29.029317] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:06:51.451 [2024-07-24 09:32:29.029454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74829 ] 00:06:51.451 [2024-07-24 09:32:29.192313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.451 [2024-07-24 09:32:29.248164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.087 09:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.087 09:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:52.087 09:32:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.087 09:32:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:52.087 09:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:52.087 09:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:52.087 09:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:52.087 09:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.087 09:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:52.087 09:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.087 09:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:52.087 09:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.087 09:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:52.087 09:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:52.087 09:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:52.345 [2024-07-24 09:32:29.925728] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:06:52.345 [2024-07-24 09:32:29.925879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74847 ] 00:06:52.345 [2024-07-24 09:32:30.094852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.345 [2024-07-24 09:32:30.141637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.345 [2024-07-24 09:32:30.141737] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
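This is the failure exit_on_failed_rpc_init is looking for: the first spdk_tgt (pid 74829, core mask 0x1) already owns /var/tmp/spdk.sock, so the second instance started on core mask 0x2 cannot bind the same RPC socket and, as the entries that follow show, spdk_app_start fails and the process exits non-zero. A condensed sketch of the collision, using only the paths and flags visible in the log; the -r option for giving the second instance its own socket is noted as an assumed remedy, not something this test exercises.

    # Reproducing the socket collision (sketch, paths from the log).
    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    $SPDK_TGT -m 0x1 &                       # first instance listens on /var/tmp/spdk.sock
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    $SPDK_TGT -m 0x2                         # same default socket -> "in use. Specify another."
    echo $?                                  # non-zero, which is what the test asserts
    # A second instance would normally get its own socket, e.g.:
    # $SPDK_TGT -m 0x2 -r /var/tmp/spdk2.sock    # -r is an assumed option here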
00:06:52.345 [2024-07-24 09:32:30.141762] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:52.345 [2024-07-24 09:32:30.141783] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.604 09:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:52.604 09:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.604 09:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:52.604 09:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:52.604 09:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:52.604 09:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.604 09:32:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:52.604 09:32:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 74829 00:06:52.604 09:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 74829 ']' 00:06:52.604 09:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 74829 00:06:52.604 09:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:52.604 09:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.604 09:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74829 00:06:52.604 09:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:52.604 09:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:52.604 killing process with pid 74829 00:06:52.604 09:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74829' 00:06:52.604 09:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 74829 00:06:52.604 09:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 74829 00:06:52.905 00:06:52.905 real 0m1.742s 00:06:52.905 user 0m1.803s 00:06:52.905 sys 0m0.568s 00:06:52.905 09:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.905 09:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:52.905 ************************************ 00:06:52.905 END TEST exit_on_failed_rpc_init 00:06:52.905 ************************************ 00:06:53.164 09:32:30 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:53.164 ************************************ 00:06:53.164 END TEST skip_rpc 00:06:53.164 ************************************ 00:06:53.164 00:06:53.164 real 0m14.687s 00:06:53.164 user 0m13.430s 00:06:53.164 sys 0m2.051s 00:06:53.164 09:32:30 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.164 09:32:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.164 09:32:30 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:53.164 09:32:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.164 09:32:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.164 09:32:30 -- common/autotest_common.sh@10 -- # set +x 00:06:53.164 
************************************ 00:06:53.164 START TEST rpc_client 00:06:53.164 ************************************ 00:06:53.164 09:32:30 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:53.164 * Looking for test storage... 00:06:53.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:53.164 09:32:30 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:53.164 OK 00:06:53.164 09:32:30 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:53.164 00:06:53.164 real 0m0.183s 00:06:53.164 user 0m0.089s 00:06:53.164 sys 0m0.104s 00:06:53.164 09:32:30 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.164 09:32:30 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:53.164 ************************************ 00:06:53.164 END TEST rpc_client 00:06:53.164 ************************************ 00:06:53.423 09:32:31 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:53.423 09:32:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.423 09:32:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.423 09:32:31 -- common/autotest_common.sh@10 -- # set +x 00:06:53.423 ************************************ 00:06:53.423 START TEST json_config 00:06:53.423 ************************************ 00:06:53.423 09:32:31 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:53.423 09:32:31 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a49f168b-eb54-4929-b45f-50a5185dc78e 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a49f168b-eb54-4929-b45f-50a5185dc78e 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:53.423 09:32:31 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.423 09:32:31 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.423 09:32:31 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.423 09:32:31 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.423 09:32:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.423 09:32:31 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.423 09:32:31 json_config -- paths/export.sh@5 -- # export PATH 00:06:53.423 09:32:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@47 -- # : 0 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:53.423 09:32:31 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:53.423 09:32:31 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:53.423 09:32:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:53.423 09:32:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:53.423 09:32:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:53.423 09:32:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:53.423 WARNING: No tests are enabled so not running JSON configuration tests 00:06:53.423 09:32:31 json_config -- json_config/json_config.sh@27 -- # echo 
'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:53.423 09:32:31 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:53.423 00:06:53.423 real 0m0.120s 00:06:53.423 user 0m0.059s 00:06:53.423 sys 0m0.062s 00:06:53.423 09:32:31 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.423 09:32:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.423 ************************************ 00:06:53.423 END TEST json_config 00:06:53.423 ************************************ 00:06:53.423 09:32:31 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:53.423 09:32:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.423 09:32:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.423 09:32:31 -- common/autotest_common.sh@10 -- # set +x 00:06:53.423 ************************************ 00:06:53.423 START TEST json_config_extra_key 00:06:53.423 ************************************ 00:06:53.423 09:32:31 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:53.682 09:32:31 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:53.682 09:32:31 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:53.682 09:32:31 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.682 09:32:31 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.682 09:32:31 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.682 09:32:31 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.682 09:32:31 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.682 09:32:31 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.682 09:32:31 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.682 09:32:31 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.682 09:32:31 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.682 09:32:31 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.682 09:32:31 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a49f168b-eb54-4929-b45f-50a5185dc78e 00:06:53.682 09:32:31 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a49f168b-eb54-4929-b45f-50a5185dc78e 00:06:53.682 09:32:31 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.682 09:32:31 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.682 09:32:31 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:53.682 09:32:31 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.682 09:32:31 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:53.682 09:32:31 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.682 09:32:31 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.682 09:32:31 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.682 
09:32:31 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.683 09:32:31 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.683 09:32:31 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.683 09:32:31 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:53.683 09:32:31 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.683 09:32:31 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:53.683 09:32:31 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:53.683 09:32:31 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:53.683 09:32:31 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.683 09:32:31 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.683 09:32:31 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.683 09:32:31 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:53.683 09:32:31 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:53.683 09:32:31 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:53.683 09:32:31 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:53.683 09:32:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:53.683 09:32:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:53.683 09:32:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:53.683 09:32:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:53.683 09:32:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:53.683 09:32:31 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:53.683 09:32:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:53.683 09:32:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:53.683 INFO: launching applications... 00:06:53.683 09:32:31 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:53.683 09:32:31 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:53.683 09:32:31 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:53.683 09:32:31 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:53.683 09:32:31 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:53.683 09:32:31 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:53.683 09:32:31 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:53.683 09:32:31 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:53.683 09:32:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:53.683 09:32:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:53.683 Waiting for target to run... 00:06:53.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:53.683 09:32:31 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=75000 00:06:53.683 09:32:31 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:53.683 09:32:31 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 75000 /var/tmp/spdk_tgt.sock 00:06:53.683 09:32:31 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 75000 ']' 00:06:53.683 09:32:31 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:53.683 09:32:31 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:53.683 09:32:31 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.683 09:32:31 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:53.683 09:32:31 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.683 09:32:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:53.683 [2024-07-24 09:32:31.449866] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:06:53.683 [2024-07-24 09:32:31.450431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75000 ] 00:06:54.250 [2024-07-24 09:32:31.836406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.250 [2024-07-24 09:32:31.864247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.508 09:32:32 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.508 09:32:32 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:54.508 00:06:54.508 INFO: shutting down applications... 00:06:54.508 09:32:32 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:54.508 09:32:32 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:54.508 09:32:32 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:54.509 09:32:32 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:54.509 09:32:32 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:54.509 09:32:32 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 75000 ]] 00:06:54.509 09:32:32 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 75000 00:06:54.509 09:32:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:54.509 09:32:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:54.509 09:32:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 75000 00:06:54.509 09:32:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:55.075 09:32:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:55.075 09:32:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:55.075 09:32:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 75000 00:06:55.075 09:32:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:55.075 09:32:32 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:55.075 09:32:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:55.075 SPDK target shutdown done 00:06:55.075 Success 00:06:55.075 09:32:32 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:55.075 09:32:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:55.075 00:06:55.075 real 0m1.513s 00:06:55.075 user 0m1.222s 00:06:55.075 sys 0m0.471s 00:06:55.075 09:32:32 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.075 ************************************ 00:06:55.075 END TEST json_config_extra_key 00:06:55.075 ************************************ 00:06:55.075 09:32:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:55.075 09:32:32 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:55.075 09:32:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.075 09:32:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.075 09:32:32 -- common/autotest_common.sh@10 -- # set +x 00:06:55.075 ************************************ 00:06:55.075 START TEST alias_rpc 00:06:55.075 
************************************ 00:06:55.075 09:32:32 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:55.332 * Looking for test storage... 00:06:55.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:55.332 09:32:32 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:55.332 09:32:32 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=75067 00:06:55.332 09:32:32 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 75067 00:06:55.332 09:32:32 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 75067 ']' 00:06:55.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.332 09:32:32 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.332 09:32:32 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.332 09:32:32 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:55.332 09:32:32 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.332 09:32:32 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.332 09:32:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.332 [2024-07-24 09:32:33.042072] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:06:55.332 [2024-07-24 09:32:33.042246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75067 ] 00:06:55.588 [2024-07-24 09:32:33.216233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.588 [2024-07-24 09:32:33.263183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.149 09:32:33 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.149 09:32:33 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:56.149 09:32:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:56.407 09:32:34 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 75067 00:06:56.407 09:32:34 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 75067 ']' 00:06:56.407 09:32:34 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 75067 00:06:56.408 09:32:34 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:56.408 09:32:34 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.408 09:32:34 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75067 00:06:56.408 killing process with pid 75067 00:06:56.408 09:32:34 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.408 09:32:34 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.408 09:32:34 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75067' 00:06:56.408 09:32:34 alias_rpc -- common/autotest_common.sh@969 -- # kill 75067 00:06:56.408 09:32:34 alias_rpc -- common/autotest_common.sh@974 -- # wait 75067 00:06:56.975 ************************************ 00:06:56.975 END TEST alias_rpc 00:06:56.975 ************************************ 00:06:56.975 00:06:56.975 real 0m1.712s 
00:06:56.975 user 0m1.709s 00:06:56.975 sys 0m0.537s 00:06:56.975 09:32:34 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.975 09:32:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.975 09:32:34 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:56.975 09:32:34 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:56.975 09:32:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.975 09:32:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.975 09:32:34 -- common/autotest_common.sh@10 -- # set +x 00:06:56.975 ************************************ 00:06:56.975 START TEST spdkcli_tcp 00:06:56.975 ************************************ 00:06:56.975 09:32:34 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:56.975 * Looking for test storage... 00:06:56.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:56.975 09:32:34 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:56.975 09:32:34 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:56.975 09:32:34 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:56.975 09:32:34 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:56.975 09:32:34 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:56.975 09:32:34 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:56.975 09:32:34 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:56.975 09:32:34 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:56.975 09:32:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:56.975 09:32:34 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=75143 00:06:56.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.975 09:32:34 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 75143 00:06:56.975 09:32:34 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 75143 ']' 00:06:56.975 09:32:34 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.975 09:32:34 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.975 09:32:34 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.975 09:32:34 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.975 09:32:34 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:56.975 09:32:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.234 [2024-07-24 09:32:34.828162] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:06:57.234 [2024-07-24 09:32:34.828305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75143 ] 00:06:57.234 [2024-07-24 09:32:34.999107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:57.234 [2024-07-24 09:32:35.045092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.234 [2024-07-24 09:32:35.045182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.171 09:32:35 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.171 09:32:35 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:58.171 09:32:35 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=75155 00:06:58.171 09:32:35 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:58.171 09:32:35 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:58.171 [ 00:06:58.171 "bdev_malloc_delete", 00:06:58.171 "bdev_malloc_create", 00:06:58.171 "bdev_null_resize", 00:06:58.171 "bdev_null_delete", 00:06:58.171 "bdev_null_create", 00:06:58.171 "bdev_nvme_cuse_unregister", 00:06:58.171 "bdev_nvme_cuse_register", 00:06:58.171 "bdev_opal_new_user", 00:06:58.171 "bdev_opal_set_lock_state", 00:06:58.171 "bdev_opal_delete", 00:06:58.171 "bdev_opal_get_info", 00:06:58.171 "bdev_opal_create", 00:06:58.171 "bdev_nvme_opal_revert", 00:06:58.171 "bdev_nvme_opal_init", 00:06:58.171 "bdev_nvme_send_cmd", 00:06:58.171 "bdev_nvme_get_path_iostat", 00:06:58.171 "bdev_nvme_get_mdns_discovery_info", 00:06:58.171 "bdev_nvme_stop_mdns_discovery", 00:06:58.171 "bdev_nvme_start_mdns_discovery", 00:06:58.171 "bdev_nvme_set_multipath_policy", 00:06:58.171 "bdev_nvme_set_preferred_path", 00:06:58.171 "bdev_nvme_get_io_paths", 00:06:58.171 "bdev_nvme_remove_error_injection", 00:06:58.171 "bdev_nvme_add_error_injection", 00:06:58.171 "bdev_nvme_get_discovery_info", 00:06:58.171 "bdev_nvme_stop_discovery", 00:06:58.171 "bdev_nvme_start_discovery", 00:06:58.171 "bdev_nvme_get_controller_health_info", 00:06:58.171 "bdev_nvme_disable_controller", 00:06:58.171 "bdev_nvme_enable_controller", 00:06:58.171 "bdev_nvme_reset_controller", 00:06:58.171 "bdev_nvme_get_transport_statistics", 00:06:58.171 "bdev_nvme_apply_firmware", 00:06:58.171 "bdev_nvme_detach_controller", 00:06:58.171 "bdev_nvme_get_controllers", 00:06:58.171 "bdev_nvme_attach_controller", 00:06:58.171 "bdev_nvme_set_hotplug", 00:06:58.171 "bdev_nvme_set_options", 00:06:58.171 "bdev_passthru_delete", 00:06:58.171 "bdev_passthru_create", 00:06:58.171 "bdev_lvol_set_parent_bdev", 00:06:58.171 "bdev_lvol_set_parent", 00:06:58.171 "bdev_lvol_check_shallow_copy", 00:06:58.171 "bdev_lvol_start_shallow_copy", 00:06:58.171 "bdev_lvol_grow_lvstore", 00:06:58.171 "bdev_lvol_get_lvols", 00:06:58.171 "bdev_lvol_get_lvstores", 00:06:58.171 "bdev_lvol_delete", 00:06:58.171 "bdev_lvol_set_read_only", 00:06:58.171 "bdev_lvol_resize", 00:06:58.171 "bdev_lvol_decouple_parent", 00:06:58.171 "bdev_lvol_inflate", 00:06:58.171 "bdev_lvol_rename", 00:06:58.171 "bdev_lvol_clone_bdev", 00:06:58.171 "bdev_lvol_clone", 00:06:58.171 "bdev_lvol_snapshot", 00:06:58.171 "bdev_lvol_create", 00:06:58.171 "bdev_lvol_delete_lvstore", 00:06:58.171 "bdev_lvol_rename_lvstore", 00:06:58.171 "bdev_lvol_create_lvstore", 
00:06:58.171 "bdev_raid_set_options", 00:06:58.171 "bdev_raid_remove_base_bdev", 00:06:58.171 "bdev_raid_add_base_bdev", 00:06:58.171 "bdev_raid_delete", 00:06:58.171 "bdev_raid_create", 00:06:58.171 "bdev_raid_get_bdevs", 00:06:58.171 "bdev_error_inject_error", 00:06:58.171 "bdev_error_delete", 00:06:58.171 "bdev_error_create", 00:06:58.171 "bdev_split_delete", 00:06:58.171 "bdev_split_create", 00:06:58.171 "bdev_delay_delete", 00:06:58.171 "bdev_delay_create", 00:06:58.171 "bdev_delay_update_latency", 00:06:58.171 "bdev_zone_block_delete", 00:06:58.171 "bdev_zone_block_create", 00:06:58.171 "blobfs_create", 00:06:58.171 "blobfs_detect", 00:06:58.171 "blobfs_set_cache_size", 00:06:58.171 "bdev_xnvme_delete", 00:06:58.171 "bdev_xnvme_create", 00:06:58.171 "bdev_aio_delete", 00:06:58.171 "bdev_aio_rescan", 00:06:58.171 "bdev_aio_create", 00:06:58.171 "bdev_ftl_set_property", 00:06:58.171 "bdev_ftl_get_properties", 00:06:58.171 "bdev_ftl_get_stats", 00:06:58.171 "bdev_ftl_unmap", 00:06:58.171 "bdev_ftl_unload", 00:06:58.171 "bdev_ftl_delete", 00:06:58.171 "bdev_ftl_load", 00:06:58.171 "bdev_ftl_create", 00:06:58.171 "bdev_virtio_attach_controller", 00:06:58.171 "bdev_virtio_scsi_get_devices", 00:06:58.171 "bdev_virtio_detach_controller", 00:06:58.171 "bdev_virtio_blk_set_hotplug", 00:06:58.171 "bdev_iscsi_delete", 00:06:58.171 "bdev_iscsi_create", 00:06:58.171 "bdev_iscsi_set_options", 00:06:58.171 "accel_error_inject_error", 00:06:58.171 "ioat_scan_accel_module", 00:06:58.171 "dsa_scan_accel_module", 00:06:58.171 "iaa_scan_accel_module", 00:06:58.171 "keyring_file_remove_key", 00:06:58.171 "keyring_file_add_key", 00:06:58.171 "keyring_linux_set_options", 00:06:58.171 "iscsi_get_histogram", 00:06:58.171 "iscsi_enable_histogram", 00:06:58.171 "iscsi_set_options", 00:06:58.171 "iscsi_get_auth_groups", 00:06:58.171 "iscsi_auth_group_remove_secret", 00:06:58.171 "iscsi_auth_group_add_secret", 00:06:58.171 "iscsi_delete_auth_group", 00:06:58.171 "iscsi_create_auth_group", 00:06:58.171 "iscsi_set_discovery_auth", 00:06:58.171 "iscsi_get_options", 00:06:58.171 "iscsi_target_node_request_logout", 00:06:58.171 "iscsi_target_node_set_redirect", 00:06:58.171 "iscsi_target_node_set_auth", 00:06:58.171 "iscsi_target_node_add_lun", 00:06:58.171 "iscsi_get_stats", 00:06:58.171 "iscsi_get_connections", 00:06:58.171 "iscsi_portal_group_set_auth", 00:06:58.171 "iscsi_start_portal_group", 00:06:58.171 "iscsi_delete_portal_group", 00:06:58.171 "iscsi_create_portal_group", 00:06:58.171 "iscsi_get_portal_groups", 00:06:58.171 "iscsi_delete_target_node", 00:06:58.171 "iscsi_target_node_remove_pg_ig_maps", 00:06:58.171 "iscsi_target_node_add_pg_ig_maps", 00:06:58.171 "iscsi_create_target_node", 00:06:58.171 "iscsi_get_target_nodes", 00:06:58.171 "iscsi_delete_initiator_group", 00:06:58.171 "iscsi_initiator_group_remove_initiators", 00:06:58.171 "iscsi_initiator_group_add_initiators", 00:06:58.171 "iscsi_create_initiator_group", 00:06:58.171 "iscsi_get_initiator_groups", 00:06:58.171 "nvmf_set_crdt", 00:06:58.171 "nvmf_set_config", 00:06:58.171 "nvmf_set_max_subsystems", 00:06:58.171 "nvmf_stop_mdns_prr", 00:06:58.171 "nvmf_publish_mdns_prr", 00:06:58.171 "nvmf_subsystem_get_listeners", 00:06:58.171 "nvmf_subsystem_get_qpairs", 00:06:58.171 "nvmf_subsystem_get_controllers", 00:06:58.171 "nvmf_get_stats", 00:06:58.171 "nvmf_get_transports", 00:06:58.171 "nvmf_create_transport", 00:06:58.171 "nvmf_get_targets", 00:06:58.171 "nvmf_delete_target", 00:06:58.171 "nvmf_create_target", 00:06:58.171 
"nvmf_subsystem_allow_any_host", 00:06:58.171 "nvmf_subsystem_remove_host", 00:06:58.171 "nvmf_subsystem_add_host", 00:06:58.171 "nvmf_ns_remove_host", 00:06:58.171 "nvmf_ns_add_host", 00:06:58.171 "nvmf_subsystem_remove_ns", 00:06:58.171 "nvmf_subsystem_add_ns", 00:06:58.171 "nvmf_subsystem_listener_set_ana_state", 00:06:58.171 "nvmf_discovery_get_referrals", 00:06:58.171 "nvmf_discovery_remove_referral", 00:06:58.171 "nvmf_discovery_add_referral", 00:06:58.171 "nvmf_subsystem_remove_listener", 00:06:58.171 "nvmf_subsystem_add_listener", 00:06:58.171 "nvmf_delete_subsystem", 00:06:58.171 "nvmf_create_subsystem", 00:06:58.171 "nvmf_get_subsystems", 00:06:58.171 "env_dpdk_get_mem_stats", 00:06:58.171 "nbd_get_disks", 00:06:58.171 "nbd_stop_disk", 00:06:58.171 "nbd_start_disk", 00:06:58.171 "ublk_recover_disk", 00:06:58.171 "ublk_get_disks", 00:06:58.171 "ublk_stop_disk", 00:06:58.171 "ublk_start_disk", 00:06:58.171 "ublk_destroy_target", 00:06:58.171 "ublk_create_target", 00:06:58.171 "virtio_blk_create_transport", 00:06:58.171 "virtio_blk_get_transports", 00:06:58.171 "vhost_controller_set_coalescing", 00:06:58.171 "vhost_get_controllers", 00:06:58.171 "vhost_delete_controller", 00:06:58.171 "vhost_create_blk_controller", 00:06:58.171 "vhost_scsi_controller_remove_target", 00:06:58.171 "vhost_scsi_controller_add_target", 00:06:58.171 "vhost_start_scsi_controller", 00:06:58.171 "vhost_create_scsi_controller", 00:06:58.172 "thread_set_cpumask", 00:06:58.172 "framework_get_governor", 00:06:58.172 "framework_get_scheduler", 00:06:58.172 "framework_set_scheduler", 00:06:58.172 "framework_get_reactors", 00:06:58.172 "thread_get_io_channels", 00:06:58.172 "thread_get_pollers", 00:06:58.172 "thread_get_stats", 00:06:58.172 "framework_monitor_context_switch", 00:06:58.172 "spdk_kill_instance", 00:06:58.172 "log_enable_timestamps", 00:06:58.172 "log_get_flags", 00:06:58.172 "log_clear_flag", 00:06:58.172 "log_set_flag", 00:06:58.172 "log_get_level", 00:06:58.172 "log_set_level", 00:06:58.172 "log_get_print_level", 00:06:58.172 "log_set_print_level", 00:06:58.172 "framework_enable_cpumask_locks", 00:06:58.172 "framework_disable_cpumask_locks", 00:06:58.172 "framework_wait_init", 00:06:58.172 "framework_start_init", 00:06:58.172 "scsi_get_devices", 00:06:58.172 "bdev_get_histogram", 00:06:58.172 "bdev_enable_histogram", 00:06:58.172 "bdev_set_qos_limit", 00:06:58.172 "bdev_set_qd_sampling_period", 00:06:58.172 "bdev_get_bdevs", 00:06:58.172 "bdev_reset_iostat", 00:06:58.172 "bdev_get_iostat", 00:06:58.172 "bdev_examine", 00:06:58.172 "bdev_wait_for_examine", 00:06:58.172 "bdev_set_options", 00:06:58.172 "notify_get_notifications", 00:06:58.172 "notify_get_types", 00:06:58.172 "accel_get_stats", 00:06:58.172 "accel_set_options", 00:06:58.172 "accel_set_driver", 00:06:58.172 "accel_crypto_key_destroy", 00:06:58.172 "accel_crypto_keys_get", 00:06:58.172 "accel_crypto_key_create", 00:06:58.172 "accel_assign_opc", 00:06:58.172 "accel_get_module_info", 00:06:58.172 "accel_get_opc_assignments", 00:06:58.172 "vmd_rescan", 00:06:58.172 "vmd_remove_device", 00:06:58.172 "vmd_enable", 00:06:58.172 "sock_get_default_impl", 00:06:58.172 "sock_set_default_impl", 00:06:58.172 "sock_impl_set_options", 00:06:58.172 "sock_impl_get_options", 00:06:58.172 "iobuf_get_stats", 00:06:58.172 "iobuf_set_options", 00:06:58.172 "framework_get_pci_devices", 00:06:58.172 "framework_get_config", 00:06:58.172 "framework_get_subsystems", 00:06:58.172 "trace_get_info", 00:06:58.172 "trace_get_tpoint_group_mask", 00:06:58.172 
"trace_disable_tpoint_group", 00:06:58.172 "trace_enable_tpoint_group", 00:06:58.172 "trace_clear_tpoint_mask", 00:06:58.172 "trace_set_tpoint_mask", 00:06:58.172 "keyring_get_keys", 00:06:58.172 "spdk_get_version", 00:06:58.172 "rpc_get_methods" 00:06:58.172 ] 00:06:58.172 09:32:35 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:58.172 09:32:35 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:58.172 09:32:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.172 09:32:35 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:58.172 09:32:35 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 75143 00:06:58.172 09:32:35 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 75143 ']' 00:06:58.172 09:32:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 75143 00:06:58.172 09:32:35 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:58.172 09:32:35 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.172 09:32:35 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75143 00:06:58.172 killing process with pid 75143 00:06:58.172 09:32:35 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.172 09:32:35 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.172 09:32:35 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75143' 00:06:58.172 09:32:35 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 75143 00:06:58.172 09:32:35 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 75143 00:06:58.740 ************************************ 00:06:58.740 END TEST spdkcli_tcp 00:06:58.740 ************************************ 00:06:58.740 00:06:58.740 real 0m1.729s 00:06:58.740 user 0m2.847s 00:06:58.740 sys 0m0.580s 00:06:58.740 09:32:36 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.740 09:32:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.740 09:32:36 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:58.740 09:32:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.740 09:32:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.740 09:32:36 -- common/autotest_common.sh@10 -- # set +x 00:06:58.740 ************************************ 00:06:58.740 START TEST dpdk_mem_utility 00:06:58.740 ************************************ 00:06:58.740 09:32:36 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:58.740 * Looking for test storage... 00:06:58.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:58.740 09:32:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:58.740 09:32:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=75230 00:06:58.740 09:32:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 75230 00:06:58.740 09:32:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:58.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:58.740 09:32:36 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 75230 ']' 00:06:58.740 09:32:36 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.740 09:32:36 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.740 09:32:36 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.740 09:32:36 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.740 09:32:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:58.999 [2024-07-24 09:32:36.617858] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:06:58.999 [2024-07-24 09:32:36.618001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75230 ] 00:06:58.999 [2024-07-24 09:32:36.787614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.258 [2024-07-24 09:32:36.833185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.828 09:32:37 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.828 09:32:37 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:59.828 09:32:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:59.828 09:32:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:59.828 09:32:37 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.828 09:32:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:59.828 { 00:06:59.828 "filename": "/tmp/spdk_mem_dump.txt" 00:06:59.828 } 00:06:59.828 09:32:37 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.828 09:32:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:59.828 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:59.828 1 heaps totaling size 814.000000 MiB 00:06:59.828 size: 814.000000 MiB heap id: 0 00:06:59.828 end heaps---------- 00:06:59.828 8 mempools totaling size 598.116089 MiB 00:06:59.828 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:59.828 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:59.828 size: 84.521057 MiB name: bdev_io_75230 00:06:59.828 size: 51.011292 MiB name: evtpool_75230 00:06:59.828 size: 50.003479 MiB name: msgpool_75230 00:06:59.828 size: 21.763794 MiB name: PDU_Pool 00:06:59.828 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:59.828 size: 0.026123 MiB name: Session_Pool 00:06:59.828 end mempools------- 00:06:59.828 6 memzones totaling size 4.142822 MiB 00:06:59.828 size: 1.000366 MiB name: RG_ring_0_75230 00:06:59.828 size: 1.000366 MiB name: RG_ring_1_75230 00:06:59.828 size: 1.000366 MiB name: RG_ring_4_75230 00:06:59.828 size: 1.000366 MiB name: RG_ring_5_75230 00:06:59.828 size: 0.125366 MiB name: RG_ring_2_75230 00:06:59.828 size: 0.015991 MiB name: RG_ring_3_75230 00:06:59.828 end memzones------- 00:06:59.828 09:32:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:59.828 heap id: 0 total size: 814.000000 MiB number of busy 
elements: 298 number of free elements: 15 00:06:59.828 list of free elements. size: 12.472290 MiB 00:06:59.828 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:59.828 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:59.828 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:59.828 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:59.828 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:59.828 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:59.828 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:59.828 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:59.828 element at address: 0x200000200000 with size: 0.833191 MiB 00:06:59.828 element at address: 0x20001aa00000 with size: 0.568787 MiB 00:06:59.828 element at address: 0x20000b200000 with size: 0.489807 MiB 00:06:59.828 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:59.828 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:59.828 element at address: 0x200027e00000 with size: 0.395752 MiB 00:06:59.828 element at address: 0x200003a00000 with size: 0.347839 MiB 00:06:59.828 list of standard malloc elements. size: 199.265137 MiB 00:06:59.828 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:59.828 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:59.828 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:59.828 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:59.828 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:59.828 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:59.828 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:59.828 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:59.828 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:59.828 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d6540 with size: 0.000183 MiB 
00:06:59.828 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:59.828 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:59.828 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:59.828 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:59.828 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:59.828 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:59.828 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:59.828 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:59.828 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:59.828 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:59.828 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:59.828 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:59.828 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:59.828 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:59.828 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:59.828 element at address: 0x200003a59240 with size: 0.000183 MiB 00:06:59.828 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:59.828 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:59.828 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:59.828 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:59.829 element at 
address: 0x200003a59600 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20000b27dac0 
with size: 0.000183 MiB 00:06:59.829 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:59.829 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:59.829 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:59.829 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa93a00 with size: 0.000183 MiB 
00:06:59.829 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:59.829 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200027e65500 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:59.829 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:59.830 element at 
address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6f0c0 
with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:59.830 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:59.830 list of memzone associated elements. size: 602.262573 MiB 00:06:59.830 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:59.830 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:59.830 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:59.830 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:59.830 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:59.830 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_75230_0 00:06:59.830 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:59.830 associated memzone info: size: 48.002930 MiB name: MP_evtpool_75230_0 00:06:59.830 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:59.830 associated memzone info: size: 48.002930 MiB name: MP_msgpool_75230_0 00:06:59.830 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:59.830 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:59.830 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:59.830 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:59.830 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:59.830 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_75230 00:06:59.830 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:59.830 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_75230 00:06:59.830 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:59.830 associated memzone info: size: 1.007996 MiB name: MP_evtpool_75230 00:06:59.830 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:59.830 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:59.830 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:59.830 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:59.830 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:59.830 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:59.830 element at address: 0x2000008fd240 with size: 
1.008118 MiB 00:06:59.830 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:59.830 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:59.830 associated memzone info: size: 1.000366 MiB name: RG_ring_0_75230 00:06:59.830 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:59.830 associated memzone info: size: 1.000366 MiB name: RG_ring_1_75230 00:06:59.830 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:59.830 associated memzone info: size: 1.000366 MiB name: RG_ring_4_75230 00:06:59.830 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:59.830 associated memzone info: size: 1.000366 MiB name: RG_ring_5_75230 00:06:59.830 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:59.830 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_75230 00:06:59.830 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:59.830 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:59.830 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:59.830 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:59.830 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:59.830 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:59.830 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:59.830 associated memzone info: size: 0.125366 MiB name: RG_ring_2_75230 00:06:59.830 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:59.830 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:59.830 element at address: 0x200027e65680 with size: 0.023743 MiB 00:06:59.830 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:59.830 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:59.830 associated memzone info: size: 0.015991 MiB name: RG_ring_3_75230 00:06:59.830 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:06:59.830 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:59.830 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:59.830 associated memzone info: size: 0.000183 MiB name: MP_msgpool_75230 00:06:59.830 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:59.830 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_75230 00:06:59.830 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:06:59.830 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:59.830 09:32:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:59.830 09:32:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 75230 00:06:59.830 09:32:37 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 75230 ']' 00:06:59.830 09:32:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 75230 00:06:59.830 09:32:37 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:59.830 09:32:37 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.831 09:32:37 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75230 00:06:59.831 killing process with pid 75230 00:06:59.831 09:32:37 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.831 09:32:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.831 09:32:37 dpdk_mem_utility -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 75230' 00:06:59.831 09:32:37 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 75230 00:06:59.831 09:32:37 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 75230 00:07:00.398 00:07:00.398 real 0m1.572s 00:07:00.398 user 0m1.503s 00:07:00.398 sys 0m0.508s 00:07:00.398 ************************************ 00:07:00.398 END TEST dpdk_mem_utility 00:07:00.398 ************************************ 00:07:00.398 09:32:37 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.398 09:32:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:00.398 09:32:38 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:00.398 09:32:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.398 09:32:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.398 09:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:00.398 ************************************ 00:07:00.398 START TEST event 00:07:00.398 ************************************ 00:07:00.398 09:32:38 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:00.398 * Looking for test storage... 00:07:00.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:00.398 09:32:38 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:00.398 09:32:38 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:00.398 09:32:38 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:00.398 09:32:38 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:00.398 09:32:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.398 09:32:38 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.398 ************************************ 00:07:00.398 START TEST event_perf 00:07:00.398 ************************************ 00:07:00.398 09:32:38 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:00.657 Running I/O for 1 seconds...[2024-07-24 09:32:38.220042] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:07:00.657 [2024-07-24 09:32:38.220274] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75303 ] 00:07:00.657 [2024-07-24 09:32:38.389750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.657 [2024-07-24 09:32:38.437013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.657 [2024-07-24 09:32:38.437223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.657 [2024-07-24 09:32:38.437259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.657 Running I/O for 1 seconds...[2024-07-24 09:32:38.437386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.034 00:07:02.034 lcore 0: 205957 00:07:02.034 lcore 1: 205956 00:07:02.034 lcore 2: 205955 00:07:02.034 lcore 3: 205955 00:07:02.034 done. 
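The event_perf pass above runs one reactor per core in the 0xF mask for one second and reports how many events each lcore processed. A minimal sketch of invoking the same binary by hand, assuming the SPDK tree checked out at this job's workspace path:

    # Illustrative sketch only; the path below is this job's workspace, not a general install location.
    SPDK=/home/vagrant/spdk_repo/spdk
    # Cores 0-3 (mask 0xF), run for 1 second -- the same arguments traced above.
    "$SPDK/test/event/event_perf/event_perf" -m 0xF -t 1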
00:07:02.034 00:07:02.034 real 0m1.364s 00:07:02.034 user 0m4.102s 00:07:02.034 sys 0m0.142s 00:07:02.034 09:32:39 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.034 09:32:39 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:02.034 ************************************ 00:07:02.034 END TEST event_perf 00:07:02.034 ************************************ 00:07:02.034 09:32:39 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:02.034 09:32:39 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:02.034 09:32:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.034 09:32:39 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.034 ************************************ 00:07:02.034 START TEST event_reactor 00:07:02.034 ************************************ 00:07:02.034 09:32:39 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:02.034 [2024-07-24 09:32:39.656492] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:07:02.034 [2024-07-24 09:32:39.656652] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75342 ] 00:07:02.034 [2024-07-24 09:32:39.825089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.297 [2024-07-24 09:32:39.875551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.232 test_start 00:07:03.232 oneshot 00:07:03.232 tick 100 00:07:03.232 tick 100 00:07:03.232 tick 250 00:07:03.232 tick 100 00:07:03.232 tick 100 00:07:03.232 tick 250 00:07:03.232 tick 100 00:07:03.232 tick 500 00:07:03.232 tick 100 00:07:03.232 tick 100 00:07:03.232 tick 250 00:07:03.232 tick 100 00:07:03.232 tick 100 00:07:03.232 test_end 00:07:03.232 00:07:03.232 real 0m1.355s 00:07:03.232 user 0m1.141s 00:07:03.232 sys 0m0.106s 00:07:03.232 09:32:40 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.232 ************************************ 00:07:03.232 END TEST event_reactor 00:07:03.232 ************************************ 00:07:03.232 09:32:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:03.232 09:32:41 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:03.232 09:32:41 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:03.232 09:32:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.232 09:32:41 event -- common/autotest_common.sh@10 -- # set +x 00:07:03.232 ************************************ 00:07:03.232 START TEST event_reactor_perf 00:07:03.232 ************************************ 00:07:03.232 09:32:41 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:03.490 [2024-07-24 09:32:41.071875] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:07:03.490 [2024-07-24 09:32:41.072019] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75379 ] 00:07:03.490 [2024-07-24 09:32:41.240402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.490 [2024-07-24 09:32:41.290662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.866 test_start 00:07:04.866 test_end 00:07:04.866 Performance: 369090 events per second 00:07:04.866 ************************************ 00:07:04.866 END TEST event_reactor_perf 00:07:04.866 ************************************ 00:07:04.866 00:07:04.866 real 0m1.354s 00:07:04.866 user 0m1.132s 00:07:04.866 sys 0m0.114s 00:07:04.866 09:32:42 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.866 09:32:42 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:04.866 09:32:42 event -- event/event.sh@49 -- # uname -s 00:07:04.866 09:32:42 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:04.866 09:32:42 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:04.866 09:32:42 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.866 09:32:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.866 09:32:42 event -- common/autotest_common.sh@10 -- # set +x 00:07:04.866 ************************************ 00:07:04.866 START TEST event_scheduler 00:07:04.866 ************************************ 00:07:04.866 09:32:42 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:04.866 * Looking for test storage... 00:07:04.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:04.866 09:32:42 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:04.866 09:32:42 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=75441 00:07:04.866 09:32:42 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:04.866 09:32:42 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:04.866 09:32:42 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 75441 00:07:04.866 09:32:42 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 75441 ']' 00:07:04.866 09:32:42 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.866 09:32:42 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.866 09:32:42 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.866 09:32:42 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.866 09:32:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:04.866 [2024-07-24 09:32:42.659168] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:07:04.866 [2024-07-24 09:32:42.659516] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75441 ] 00:07:05.124 [2024-07-24 09:32:42.828519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.124 [2024-07-24 09:32:42.883039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.124 [2024-07-24 09:32:42.883273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.124 [2024-07-24 09:32:42.883409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.124 [2024-07-24 09:32:42.883428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.690 09:32:43 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.690 09:32:43 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:05.690 09:32:43 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:05.690 09:32:43 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.690 09:32:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.690 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:05.690 POWER: Cannot set governor of lcore 0 to userspace 00:07:05.691 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:05.691 POWER: Cannot set governor of lcore 0 to performance 00:07:05.691 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:05.691 POWER: Cannot set governor of lcore 0 to userspace 00:07:05.691 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:05.691 POWER: Cannot set governor of lcore 0 to userspace 00:07:05.691 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:05.691 POWER: Unable to set Power Management Environment for lcore 0 00:07:05.691 [2024-07-24 09:32:43.492657] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:05.691 [2024-07-24 09:32:43.492678] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:05.691 [2024-07-24 09:32:43.492706] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:05.691 [2024-07-24 09:32:43.492726] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:05.691 [2024-07-24 09:32:43.492740] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:05.691 [2024-07-24 09:32:43.492753] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:05.691 09:32:43 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.691 09:32:43 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:05.691 09:32:43 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.691 09:32:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.949 [2024-07-24 09:32:43.574828] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
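The POWER errors above are expected in this VM: /sys/devices/system/cpu/cpu*/cpufreq is not exposed, so the DPDK governor cannot initialize and the dynamic scheduler falls back to running without frequency scaling (load limit 20, core limit 80, core busy 95). A small sketch of the pre-check one could make before expecting governor support, with scheduler selection done over RPC exactly as traced above:

    # Sketch, assuming a running SPDK app reachable through the test harness's rpc_cmd wrapper.
    if [ -e /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor ]; then
        echo "cpufreq sysfs present: dpdk_governor can manage core frequencies"
    else
        echo "no cpufreq sysfs: dynamic scheduler runs without a governor (the case in this VM)"
    fi
    # Scheduler selection itself is the RPC seen in the trace:
    #   rpc_cmd framework_set_scheduler dynamic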
00:07:05.949 09:32:43 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.949 09:32:43 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:05.949 09:32:43 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.949 09:32:43 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.949 09:32:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.949 ************************************ 00:07:05.949 START TEST scheduler_create_thread 00:07:05.949 ************************************ 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.949 2 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.949 3 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.949 4 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.949 5 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.949 6 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.949 7 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.949 8 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.949 9 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.949 09:32:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:05.950 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.950 09:32:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.561 10 00:07:06.561 09:32:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.561 09:32:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:06.561 09:32:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.561 09:32:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.941 09:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.941 09:32:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:07.941 09:32:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:07.941 09:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.941 09:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.508 09:32:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.508 09:32:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:08.508 09:32:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.508 09:32:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.443 09:32:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.443 09:32:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:09.443 09:32:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:09.443 09:32:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.443 09:32:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.012 ************************************ 00:07:10.012 END TEST scheduler_create_thread 00:07:10.012 ************************************ 00:07:10.012 09:32:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.013 00:07:10.013 real 0m4.211s 00:07:10.013 user 0m0.028s 00:07:10.013 sys 0m0.009s 00:07:10.013 09:32:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.013 09:32:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.272 09:32:47 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:10.272 09:32:47 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 75441 00:07:10.272 09:32:47 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 75441 ']' 00:07:10.272 09:32:47 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 75441 00:07:10.272 09:32:47 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:10.272 09:32:47 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.272 09:32:47 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75441 00:07:10.272 killing process with pid 75441 00:07:10.272 09:32:47 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:10.272 09:32:47 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:10.272 09:32:47 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75441' 00:07:10.272 09:32:47 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 75441 00:07:10.272 09:32:47 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 75441 00:07:10.272 [2024-07-24 09:32:48.081654] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
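The scheduler_create_thread test that just finished drives the scheduler through the test app's plugin RPCs: it creates pinned active and idle threads, an unpinned thread at 30% activity, then raises one thread to 50% active and deletes another before shutting the app down. A condensed sketch of that RPC sequence (thread IDs 11 and 12 are simply the IDs returned by the earlier create calls, not fixed values):

    # Sketch of the traced sequence; rpc_cmd is the test harness's rpc.py wrapper, and
    # scheduler_plugin is the test-only RPC plugin shipped with test/event/scheduler.
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0   # returns a thread id
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50             # make that thread 50% active
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12                    # remove another thread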
00:07:10.843 00:07:10.843 real 0m5.945s 00:07:10.843 user 0m12.781s 00:07:10.843 sys 0m0.507s 00:07:10.843 09:32:48 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.843 ************************************ 00:07:10.843 END TEST event_scheduler 00:07:10.843 ************************************ 00:07:10.843 09:32:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:10.843 09:32:48 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:10.843 09:32:48 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:10.843 09:32:48 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.843 09:32:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.843 09:32:48 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.843 ************************************ 00:07:10.843 START TEST app_repeat 00:07:10.843 ************************************ 00:07:10.843 09:32:48 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:10.843 09:32:48 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.843 09:32:48 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.843 09:32:48 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:10.843 09:32:48 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.843 09:32:48 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:10.843 09:32:48 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:10.843 09:32:48 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:10.843 09:32:48 event.app_repeat -- event/event.sh@19 -- # repeat_pid=75555 00:07:10.843 09:32:48 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:10.843 09:32:48 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:10.843 Process app_repeat pid: 75555 00:07:10.843 09:32:48 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 75555' 00:07:10.843 09:32:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:10.843 spdk_app_start Round 0 00:07:10.843 09:32:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:10.843 09:32:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75555 /var/tmp/spdk-nbd.sock 00:07:10.843 09:32:48 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 75555 ']' 00:07:10.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:10.843 09:32:48 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.843 09:32:48 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.843 09:32:48 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:10.843 09:32:48 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.843 09:32:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.843 [2024-07-24 09:32:48.530474] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
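app_repeat starts the SPDK app on cores 0-1 (mask 0x3), drives it over the dedicated /var/tmp/spdk-nbd.sock RPC socket, and cycles through three start/exercise/stop rounds (Round 0-2, the `for i in {0..2}` loop traced above). Each round begins by creating the two malloc bdevs it will export over NBD, as traced below; a sketch of those calls, assuming the workspace rpc.py path used by this job:

    # Sketch only; sizes mirror the traced calls (64 MB bdev, 4096-byte blocks).
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096   # -> Malloc0
    $RPC bdev_malloc_create 64 4096   # -> Malloc1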
00:07:10.843 [2024-07-24 09:32:48.531183] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75555 ] 00:07:11.112 [2024-07-24 09:32:48.699315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.112 [2024-07-24 09:32:48.755002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.112 [2024-07-24 09:32:48.755101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.679 09:32:49 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.679 09:32:49 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:11.679 09:32:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.938 Malloc0 00:07:11.938 09:32:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.938 Malloc1 00:07:12.197 09:32:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:12.197 09:32:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.197 09:32:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:12.197 09:32:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:12.197 09:32:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.197 09:32:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:12.197 09:32:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:12.197 09:32:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.197 09:32:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:12.197 09:32:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:12.197 09:32:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.197 09:32:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:12.197 09:32:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:12.197 09:32:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:12.197 09:32:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.197 09:32:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:12.197 /dev/nbd0 00:07:12.197 09:32:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:12.197 09:32:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:12.197 09:32:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:12.198 09:32:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:12.198 09:32:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:12.198 09:32:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:12.198 09:32:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:12.198 09:32:49 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:07:12.198 09:32:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:12.198 09:32:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:12.198 09:32:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:12.198 1+0 records in 00:07:12.198 1+0 records out 00:07:12.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243745 s, 16.8 MB/s 00:07:12.198 09:32:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.198 09:32:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:12.198 09:32:49 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.198 09:32:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:12.198 09:32:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:12.198 09:32:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.198 09:32:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.198 09:32:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:12.456 /dev/nbd1 00:07:12.456 09:32:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:12.456 09:32:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:12.456 09:32:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:12.456 09:32:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:12.456 09:32:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:12.456 09:32:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:12.456 09:32:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:12.456 09:32:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:12.456 09:32:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:12.456 09:32:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:12.456 09:32:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:12.456 1+0 records in 00:07:12.456 1+0 records out 00:07:12.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514611 s, 8.0 MB/s 00:07:12.456 09:32:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.456 09:32:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:12.456 09:32:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.456 09:32:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:12.456 09:32:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:12.456 09:32:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.456 09:32:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.456 09:32:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.456 09:32:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.456 
09:32:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.714 09:32:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:12.714 { 00:07:12.714 "nbd_device": "/dev/nbd0", 00:07:12.714 "bdev_name": "Malloc0" 00:07:12.714 }, 00:07:12.714 { 00:07:12.714 "nbd_device": "/dev/nbd1", 00:07:12.714 "bdev_name": "Malloc1" 00:07:12.714 } 00:07:12.714 ]' 00:07:12.714 09:32:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:12.714 { 00:07:12.714 "nbd_device": "/dev/nbd0", 00:07:12.714 "bdev_name": "Malloc0" 00:07:12.714 }, 00:07:12.714 { 00:07:12.714 "nbd_device": "/dev/nbd1", 00:07:12.714 "bdev_name": "Malloc1" 00:07:12.714 } 00:07:12.714 ]' 00:07:12.714 09:32:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.714 09:32:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:12.714 /dev/nbd1' 00:07:12.714 09:32:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:12.714 /dev/nbd1' 00:07:12.714 09:32:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.714 09:32:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:12.714 09:32:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:12.714 09:32:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:12.714 09:32:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:12.714 09:32:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:12.714 09:32:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.714 09:32:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.714 09:32:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:12.714 09:32:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.714 09:32:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:12.714 09:32:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:12.714 256+0 records in 00:07:12.714 256+0 records out 00:07:12.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117178 s, 89.5 MB/s 00:07:12.714 09:32:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.714 09:32:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:12.715 256+0 records in 00:07:12.715 256+0 records out 00:07:12.715 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297031 s, 35.3 MB/s 00:07:12.715 09:32:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.715 09:32:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:12.973 256+0 records in 00:07:12.973 256+0 records out 00:07:12.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235693 s, 44.5 MB/s 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.973 09:32:50 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:12.973 09:32:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.974 09:32:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.974 09:32:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:13.232 09:32:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:13.232 09:32:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:13.232 09:32:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:13.232 09:32:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.232 09:32:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.232 09:32:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:13.232 09:32:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:13.232 09:32:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.232 09:32:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.232 09:32:50 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.232 09:32:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.490 09:32:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.490 09:32:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.491 09:32:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.491 09:32:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:13.491 09:32:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.491 09:32:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.491 09:32:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:13.491 09:32:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.491 09:32:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.491 09:32:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:13.491 09:32:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:13.491 09:32:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:13.491 09:32:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:13.748 09:32:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:14.007 [2024-07-24 09:32:51.620520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.007 [2024-07-24 09:32:51.669752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.007 [2024-07-24 09:32:51.669761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.007 [2024-07-24 09:32:51.713504] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:14.007 [2024-07-24 09:32:51.713586] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:17.303 spdk_app_start Round 1 00:07:17.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:17.303 09:32:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:17.303 09:32:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:17.303 09:32:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75555 /var/tmp/spdk-nbd.sock 00:07:17.303 09:32:54 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 75555 ']' 00:07:17.303 09:32:54 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:17.303 09:32:54 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.303 09:32:54 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
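The nbd_dd_data_verify steps traced above boil down to a write-then-compare loop over the exported NBD devices. A minimal stand-alone sketch of that pattern in bash (the scratch-file path and device list are illustrative assumptions, not taken from this run):

    # Write 1 MiB of random data through each NBD device, then read it back and compare.
    # Assumes the devices are already exported (e.g. via rpc.py nbd_start_disk).
    tmp_file=$(mktemp)                      # illustrative scratch file
    nbd_list=(/dev/nbd0 /dev/nbd1)          # devices under test (assumed)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # generate reference data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write through the NBD device
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"     # byte-wise compare; non-zero exit means corruption
    done
    rm -f "$tmp_file"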
00:07:17.303 09:32:54 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.303 09:32:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:17.303 09:32:54 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.303 09:32:54 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:17.303 09:32:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.303 Malloc0 00:07:17.303 09:32:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.303 Malloc1 00:07:17.303 09:32:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.303 09:32:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.303 09:32:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.303 09:32:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:17.303 09:32:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.303 09:32:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:17.303 09:32:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.303 09:32:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.303 09:32:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.303 09:32:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.303 09:32:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.303 09:32:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:17.303 09:32:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:17.303 09:32:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.303 09:32:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.303 09:32:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:17.561 /dev/nbd0 00:07:17.561 09:32:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:17.561 09:32:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:17.561 09:32:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:17.561 09:32:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:17.561 09:32:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.561 09:32:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.561 09:32:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:17.561 09:32:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:17.561 09:32:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.561 09:32:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.561 09:32:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.561 1+0 records in 00:07:17.561 1+0 records out 
00:07:17.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266385 s, 15.4 MB/s 00:07:17.561 09:32:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.561 09:32:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:17.561 09:32:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.561 09:32:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.561 09:32:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:17.561 09:32:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.561 09:32:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.561 09:32:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:17.820 /dev/nbd1 00:07:17.820 09:32:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:17.820 09:32:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:17.820 09:32:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:17.820 09:32:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:17.820 09:32:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.820 09:32:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.820 09:32:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:17.820 09:32:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:17.820 09:32:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.820 09:32:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.820 09:32:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.820 1+0 records in 00:07:17.820 1+0 records out 00:07:17.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373299 s, 11.0 MB/s 00:07:17.820 09:32:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.820 09:32:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:17.820 09:32:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.820 09:32:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.820 09:32:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:17.820 09:32:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.820 09:32:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.820 09:32:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.820 09:32:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.820 09:32:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:18.079 { 00:07:18.079 "nbd_device": "/dev/nbd0", 00:07:18.079 "bdev_name": "Malloc0" 00:07:18.079 }, 00:07:18.079 { 00:07:18.079 "nbd_device": "/dev/nbd1", 00:07:18.079 "bdev_name": "Malloc1" 00:07:18.079 } 
00:07:18.079 ]' 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:18.079 { 00:07:18.079 "nbd_device": "/dev/nbd0", 00:07:18.079 "bdev_name": "Malloc0" 00:07:18.079 }, 00:07:18.079 { 00:07:18.079 "nbd_device": "/dev/nbd1", 00:07:18.079 "bdev_name": "Malloc1" 00:07:18.079 } 00:07:18.079 ]' 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:18.079 /dev/nbd1' 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:18.079 /dev/nbd1' 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:18.079 256+0 records in 00:07:18.079 256+0 records out 00:07:18.079 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111746 s, 93.8 MB/s 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:18.079 256+0 records in 00:07:18.079 256+0 records out 00:07:18.079 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284081 s, 36.9 MB/s 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:18.079 256+0 records in 00:07:18.079 256+0 records out 00:07:18.079 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263138 s, 39.8 MB/s 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:18.079 09:32:55 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:18.079 09:32:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.337 09:32:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:18.337 09:32:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.337 09:32:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.337 09:32:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.337 09:32:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:18.337 09:32:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.337 09:32:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.337 09:32:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.337 09:32:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.337 09:32:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.337 09:32:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.337 09:32:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.337 09:32:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.337 09:32:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.337 09:32:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.337 09:32:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.337 09:32:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.595 09:32:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:18.595 09:32:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:18.595 09:32:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:18.595 09:32:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.595 09:32:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.595 09:32:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:18.595 09:32:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.595 09:32:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.595 09:32:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.595 09:32:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.595 09:32:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.853 09:32:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:18.853 09:32:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:18.853 09:32:56 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:18.853 09:32:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:18.853 09:32:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.853 09:32:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:18.853 09:32:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:18.853 09:32:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:18.853 09:32:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:18.853 09:32:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:18.853 09:32:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:18.853 09:32:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:18.853 09:32:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:19.111 09:32:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:19.370 [2024-07-24 09:32:56.940323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.370 [2024-07-24 09:32:56.989834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.370 [2024-07-24 09:32:56.989860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.370 [2024-07-24 09:32:57.033646] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:19.370 [2024-07-24 09:32:57.033725] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:22.660 spdk_app_start Round 2 00:07:22.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:22.660 09:32:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:22.660 09:32:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:22.660 09:32:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75555 /var/tmp/spdk-nbd.sock 00:07:22.660 09:32:59 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 75555 ']' 00:07:22.660 09:32:59 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:22.660 09:32:59 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.660 09:32:59 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
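Each nbd_stop_disk call in the trace is followed by a short polling loop on /proc/partitions before the next device is torn down. A hedged sketch of that wait-for-removal idiom (the retry count matches the trace's 20 iterations; the sleep interval is an assumption):

    # Wait until an nbd device disappears from /proc/partitions after nbd_stop_disk.
    wait_nbd_gone() {
        local nbd_name=$1            # e.g. "nbd0"
        local i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions || return 0   # device gone: success
            sleep 0.1                # assumed poll interval
        done
        return 1                     # still present after 20 tries
    }
    wait_nbd_gone nbd0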
00:07:22.660 09:32:59 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.660 09:32:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:22.660 09:32:59 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.660 09:32:59 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:22.660 09:32:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:22.660 Malloc0 00:07:22.660 09:33:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:22.660 Malloc1 00:07:22.660 09:33:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:22.660 09:33:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.660 09:33:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:22.660 09:33:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:22.660 09:33:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.660 09:33:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:22.660 09:33:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:22.660 09:33:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.660 09:33:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:22.660 09:33:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:22.660 09:33:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.660 09:33:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:22.660 09:33:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:22.660 09:33:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:22.660 09:33:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:22.660 09:33:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:22.920 /dev/nbd0 00:07:22.920 09:33:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:22.920 09:33:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:22.920 09:33:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:22.920 09:33:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:22.920 09:33:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:22.920 09:33:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:22.920 09:33:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:22.920 09:33:00 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:22.920 09:33:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:22.920 09:33:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:22.920 09:33:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:22.920 1+0 records in 00:07:22.920 1+0 records out 
00:07:22.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038828 s, 10.5 MB/s 00:07:22.920 09:33:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:22.920 09:33:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:22.920 09:33:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:22.920 09:33:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:22.920 09:33:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:22.920 09:33:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.920 09:33:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:22.920 09:33:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:23.178 /dev/nbd1 00:07:23.178 09:33:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:23.178 09:33:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:23.178 09:33:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:23.178 09:33:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:23.178 09:33:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:23.178 09:33:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:23.178 09:33:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:23.178 09:33:00 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:23.178 09:33:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:23.178 09:33:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:23.178 09:33:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:23.178 1+0 records in 00:07:23.178 1+0 records out 00:07:23.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321522 s, 12.7 MB/s 00:07:23.178 09:33:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:23.178 09:33:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:23.178 09:33:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:23.178 09:33:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:23.178 09:33:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:23.178 09:33:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.178 09:33:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:23.178 09:33:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:23.178 09:33:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.178 09:33:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:23.436 09:33:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:23.436 { 00:07:23.436 "nbd_device": "/dev/nbd0", 00:07:23.436 "bdev_name": "Malloc0" 00:07:23.436 }, 00:07:23.436 { 00:07:23.436 "nbd_device": "/dev/nbd1", 00:07:23.436 "bdev_name": "Malloc1" 00:07:23.436 } 
00:07:23.436 ]' 00:07:23.436 09:33:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.436 09:33:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:23.436 { 00:07:23.436 "nbd_device": "/dev/nbd0", 00:07:23.436 "bdev_name": "Malloc0" 00:07:23.436 }, 00:07:23.436 { 00:07:23.436 "nbd_device": "/dev/nbd1", 00:07:23.436 "bdev_name": "Malloc1" 00:07:23.436 } 00:07:23.436 ]' 00:07:23.436 09:33:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:23.436 /dev/nbd1' 00:07:23.436 09:33:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:23.436 /dev/nbd1' 00:07:23.436 09:33:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.436 09:33:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:23.436 09:33:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:23.436 09:33:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:23.436 09:33:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:23.436 09:33:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:23.436 09:33:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.436 09:33:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:23.436 09:33:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:23.436 09:33:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:23.436 09:33:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:23.436 09:33:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:23.436 256+0 records in 00:07:23.436 256+0 records out 00:07:23.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130898 s, 80.1 MB/s 00:07:23.436 09:33:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:23.436 09:33:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:23.437 256+0 records in 00:07:23.437 256+0 records out 00:07:23.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267995 s, 39.1 MB/s 00:07:23.437 09:33:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:23.437 09:33:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:23.437 256+0 records in 00:07:23.437 256+0 records out 00:07:23.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276844 s, 37.9 MB/s 00:07:23.437 09:33:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:23.437 09:33:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.437 09:33:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:23.437 09:33:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:23.437 09:33:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:23.437 09:33:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:23.437 09:33:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:23.437 09:33:01 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.437 09:33:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:23.437 09:33:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.437 09:33:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:23.437 09:33:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:23.437 09:33:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:23.437 09:33:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.437 09:33:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.437 09:33:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:23.437 09:33:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:23.437 09:33:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.437 09:33:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:23.694 09:33:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:23.694 09:33:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:23.694 09:33:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:23.694 09:33:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.694 09:33:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.694 09:33:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:23.694 09:33:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:23.694 09:33:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.694 09:33:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.694 09:33:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:23.952 09:33:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:23.952 09:33:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:23.952 09:33:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:23.952 09:33:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.952 09:33:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.952 09:33:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:23.952 09:33:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:23.952 09:33:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.952 09:33:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:23.952 09:33:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.952 09:33:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:24.210 09:33:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:24.210 09:33:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:24.210 09:33:01 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:24.210 09:33:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:24.210 09:33:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:24.210 09:33:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.210 09:33:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:24.210 09:33:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:24.210 09:33:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:24.210 09:33:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:24.210 09:33:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:24.210 09:33:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:24.210 09:33:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:24.468 09:33:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:24.725 [2024-07-24 09:33:02.288040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:24.725 [2024-07-24 09:33:02.329291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.725 [2024-07-24 09:33:02.329300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.725 [2024-07-24 09:33:02.373088] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:24.725 [2024-07-24 09:33:02.373158] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:28.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:28.008 09:33:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 75555 /var/tmp/spdk-nbd.sock 00:07:28.008 09:33:05 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 75555 ']' 00:07:28.008 09:33:05 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:28.008 09:33:05 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.008 09:33:05 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
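After the devices are stopped, each round double-checks that no NBD exports remain. A minimal sketch of that count check, built from the same rpc.py, jq, and grep pipeline visible in the trace (the socket path matches this run; the surrounding variables are illustrative):

    # Count the NBD devices currently exported by the target behind the given RPC socket.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    disks_json=$("$rpc" -s "$sock" nbd_get_disks)                   # JSON array of {nbd_device, bdev_name}
    disk_names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')    # one device path per line
    count=$(echo "$disk_names" | grep -c /dev/nbd || true)          # 0 when the array is empty
    [ "$count" -eq 0 ] || echo "unexpected leftover NBD exports: $count"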
00:07:28.008 09:33:05 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.008 09:33:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:28.008 09:33:05 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.008 09:33:05 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:28.008 09:33:05 event.app_repeat -- event/event.sh@39 -- # killprocess 75555 00:07:28.008 09:33:05 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 75555 ']' 00:07:28.008 09:33:05 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 75555 00:07:28.008 09:33:05 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:28.008 09:33:05 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.008 09:33:05 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75555 00:07:28.008 killing process with pid 75555 00:07:28.008 09:33:05 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:28.008 09:33:05 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.009 09:33:05 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75555' 00:07:28.009 09:33:05 event.app_repeat -- common/autotest_common.sh@969 -- # kill 75555 00:07:28.009 09:33:05 event.app_repeat -- common/autotest_common.sh@974 -- # wait 75555 00:07:28.009 spdk_app_start is called in Round 0. 00:07:28.009 Shutdown signal received, stop current app iteration 00:07:28.009 Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 reinitialization... 00:07:28.009 spdk_app_start is called in Round 1. 00:07:28.009 Shutdown signal received, stop current app iteration 00:07:28.009 Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 reinitialization... 00:07:28.009 spdk_app_start is called in Round 2. 00:07:28.009 Shutdown signal received, stop current app iteration 00:07:28.009 Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 reinitialization... 00:07:28.009 spdk_app_start is called in Round 3. 00:07:28.009 Shutdown signal received, stop current app iteration 00:07:28.009 09:33:05 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:28.009 09:33:05 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:28.009 00:07:28.009 real 0m17.124s 00:07:28.009 user 0m37.176s 00:07:28.009 sys 0m2.904s 00:07:28.009 09:33:05 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.009 09:33:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:28.009 ************************************ 00:07:28.009 END TEST app_repeat 00:07:28.009 ************************************ 00:07:28.009 09:33:05 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:28.009 09:33:05 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:28.009 09:33:05 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.009 09:33:05 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.009 09:33:05 event -- common/autotest_common.sh@10 -- # set +x 00:07:28.009 ************************************ 00:07:28.009 START TEST cpu_locks 00:07:28.009 ************************************ 00:07:28.009 09:33:05 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:28.009 * Looking for test storage... 
00:07:28.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:28.009 09:33:05 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:28.009 09:33:05 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:28.009 09:33:05 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:28.009 09:33:05 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:28.009 09:33:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.009 09:33:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.009 09:33:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.350 ************************************ 00:07:28.350 START TEST default_locks 00:07:28.350 ************************************ 00:07:28.350 09:33:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:28.350 09:33:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=75973 00:07:28.350 09:33:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 75973 00:07:28.350 09:33:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:28.350 09:33:05 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 75973 ']' 00:07:28.350 09:33:05 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.350 09:33:05 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.350 09:33:05 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.350 09:33:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.350 09:33:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.350 [2024-07-24 09:33:05.935241] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:07:28.350 [2024-07-24 09:33:05.935805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75973 ] 00:07:28.644 [2024-07-24 09:33:06.105902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.644 [2024-07-24 09:33:06.158264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.212 09:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.212 09:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:29.212 09:33:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 75973 00:07:29.212 09:33:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 75973 00:07:29.212 09:33:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:29.780 09:33:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 75973 00:07:29.780 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 75973 ']' 00:07:29.780 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 75973 00:07:29.780 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:29.780 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:29.780 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75973 00:07:29.780 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:29.780 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:29.780 killing process with pid 75973 00:07:29.780 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75973' 00:07:29.780 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 75973 00:07:29.780 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 75973 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 75973 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 75973 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 75973 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 75973 ']' 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.039 09:33:07 
event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.039 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (75973) - No such process 00:07:30.039 ERROR: process (pid: 75973) is no longer running 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:30.039 00:07:30.039 real 0m1.918s 00:07:30.039 user 0m1.928s 00:07:30.039 sys 0m0.683s 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.039 09:33:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.039 ************************************ 00:07:30.039 END TEST default_locks 00:07:30.039 ************************************ 00:07:30.039 09:33:07 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:30.039 09:33:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.039 09:33:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.039 09:33:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.039 ************************************ 00:07:30.039 START TEST default_locks_via_rpc 00:07:30.039 ************************************ 00:07:30.039 09:33:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:30.039 09:33:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=76021 00:07:30.039 09:33:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:30.039 09:33:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 76021 00:07:30.039 09:33:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 76021 ']' 00:07:30.039 09:33:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.039 09:33:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
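The default_locks case above asserts that a target started with -m 0x1 is actually holding its CPU-core lock file. A minimal sketch of that assertion, using the same lslocks and grep pair seen in the trace (the spdk_cpu_lock name comes from the trace; the helper itself is illustrative):

    # Assert that the given SPDK target PID holds at least one spdk_cpu_lock file lock.
    assert_core_locks_held() {
        local pid=$1
        if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
            echo "pid $pid holds its CPU core lock(s)"
        else
            echo "pid $pid holds no spdk_cpu_lock entries" >&2
            return 1
        fi
    }
    assert_core_locks_held 75973     # PID taken from the run above, for illustration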
00:07:30.039 09:33:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.039 09:33:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.039 09:33:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.300 [2024-07-24 09:33:07.930938] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:07:30.300 [2024-07-24 09:33:07.931089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76021 ] 00:07:30.300 [2024-07-24 09:33:08.104513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.559 [2024-07-24 09:33:08.159362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.126 09:33:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.126 09:33:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:31.126 09:33:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:31.126 09:33:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.126 09:33:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.126 09:33:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.126 09:33:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:31.126 09:33:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:31.127 09:33:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:31.127 09:33:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:31.127 09:33:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:31.127 09:33:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.127 09:33:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.127 09:33:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.127 09:33:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 76021 00:07:31.127 09:33:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 76021 00:07:31.127 09:33:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:31.696 09:33:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 76021 00:07:31.696 09:33:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 76021 ']' 00:07:31.696 09:33:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 76021 00:07:31.696 09:33:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:31.696 09:33:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.696 09:33:09 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76021 00:07:31.696 09:33:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:31.696 09:33:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:31.696 killing process with pid 76021 00:07:31.696 09:33:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76021' 00:07:31.696 09:33:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 76021 00:07:31.696 09:33:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 76021 00:07:31.954 00:07:31.954 real 0m1.931s 00:07:31.954 user 0m1.936s 00:07:31.954 sys 0m0.707s 00:07:31.954 09:33:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.954 09:33:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.954 ************************************ 00:07:31.954 END TEST default_locks_via_rpc 00:07:31.954 ************************************ 00:07:32.213 09:33:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:32.214 09:33:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:32.214 09:33:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.214 09:33:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.214 ************************************ 00:07:32.214 START TEST non_locking_app_on_locked_coremask 00:07:32.214 ************************************ 00:07:32.214 09:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:32.214 09:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=76073 00:07:32.214 09:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:32.214 09:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 76073 /var/tmp/spdk.sock 00:07:32.214 09:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 76073 ']' 00:07:32.214 09:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.214 09:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.214 09:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.214 09:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.214 09:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.214 [2024-07-24 09:33:09.930678] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
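The default_locks_via_rpc case toggles the same lock behavior at runtime instead of at startup, using the two framework RPCs that appear in the trace. A short sketch of that sequence (the socket path matches this run; tgt_pid is assumed to hold the target's PID):

    # Drop and re-take CPU core lock files on a running target via RPC.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock

    "$rpc" -s "$sock" framework_disable_cpumask_locks      # release spdk_cpu_lock files
    lslocks -p "$tgt_pid" | grep -c spdk_cpu_lock || true  # expected: 0 while disabled
    "$rpc" -s "$sock" framework_enable_cpumask_locks       # re-acquire them
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock          # expected: lock present again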
00:07:32.214 [2024-07-24 09:33:09.930808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76073 ] 00:07:32.473 [2024-07-24 09:33:10.101673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.473 [2024-07-24 09:33:10.146079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.041 09:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.041 09:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:33.041 09:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:33.041 09:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=76089 00:07:33.041 09:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 76089 /var/tmp/spdk2.sock 00:07:33.041 09:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 76089 ']' 00:07:33.041 09:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:33.041 09:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:33.041 09:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:33.041 09:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.041 09:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.041 [2024-07-24 09:33:10.819141] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:07:33.041 [2024-07-24 09:33:10.819275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76089 ] 00:07:33.300 [2024-07-24 09:33:10.981162] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
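The non_locking_app_on_locked_coremask case that starts here runs two targets on the same core mask; the second one opts out of the lock files and uses its own RPC socket. A hedged sketch of that launch pattern (binary path, core mask, and socket names mirror the trace; the waits are simplified placeholders):

    # First instance takes core 0 and its lock file; second instance shares core 0 without locking.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 &
    pid1=$!
    # ... wait for /var/tmp/spdk.sock to answer (see the wait loop sketched earlier) ...

    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    # ... wait for /var/tmp/spdk2.sock ...
    # With --disable-cpumask-locks the second start succeeds even though core 0 is already locked.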
00:07:33.300 [2024-07-24 09:33:10.985253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.300 [2024-07-24 09:33:11.078380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.869 09:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.869 09:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:33.869 09:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 76073 00:07:33.869 09:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76073 00:07:33.869 09:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:34.807 09:33:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 76073 00:07:34.807 09:33:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 76073 ']' 00:07:34.807 09:33:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 76073 00:07:34.807 09:33:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:34.807 09:33:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.807 09:33:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76073 00:07:34.807 09:33:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.807 killing process with pid 76073 00:07:34.807 09:33:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:34.807 09:33:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76073' 00:07:34.807 09:33:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 76073 00:07:34.807 09:33:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 76073 00:07:35.741 09:33:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 76089 00:07:35.741 09:33:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 76089 ']' 00:07:35.741 09:33:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 76089 00:07:35.741 09:33:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:35.741 09:33:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:35.741 09:33:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76089 00:07:35.741 09:33:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:35.741 09:33:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:35.741 killing process with pid 76089 00:07:35.741 09:33:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76089' 00:07:35.741 09:33:13 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 76089 00:07:35.741 09:33:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 76089 00:07:35.999 00:07:35.999 real 0m3.971s 00:07:35.999 user 0m4.210s 00:07:35.999 sys 0m1.271s 00:07:35.999 09:33:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.999 09:33:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.999 ************************************ 00:07:35.999 END TEST non_locking_app_on_locked_coremask 00:07:35.999 ************************************ 00:07:36.257 09:33:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:36.257 09:33:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:36.257 09:33:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.257 09:33:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.257 ************************************ 00:07:36.257 START TEST locking_app_on_unlocked_coremask 00:07:36.257 ************************************ 00:07:36.257 09:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:36.257 09:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=76158 00:07:36.258 09:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 76158 /var/tmp/spdk.sock 00:07:36.258 09:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:36.258 09:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 76158 ']' 00:07:36.258 09:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.258 09:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.258 09:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.258 09:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.258 09:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.258 [2024-07-24 09:33:13.981047] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:07:36.258 [2024-07-24 09:33:13.981184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76158 ] 00:07:36.515 [2024-07-24 09:33:14.150723] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:36.515 [2024-07-24 09:33:14.150794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.515 [2024-07-24 09:33:14.201782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.083 09:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.083 09:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:37.083 09:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=76173 00:07:37.083 09:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 76173 /var/tmp/spdk2.sock 00:07:37.083 09:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 76173 ']' 00:07:37.083 09:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:37.083 09:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:37.083 09:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:37.083 09:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.083 09:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.083 09:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:37.083 [2024-07-24 09:33:14.899055] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:07:37.083 [2024-07-24 09:33:14.899880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76173 ] 00:07:37.342 [2024-07-24 09:33:15.070175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.601 [2024-07-24 09:33:15.174823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.167 09:33:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.167 09:33:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:38.167 09:33:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 76173 00:07:38.167 09:33:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76173 00:07:38.167 09:33:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:38.733 09:33:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 76158 00:07:38.734 09:33:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 76158 ']' 00:07:38.734 09:33:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 76158 00:07:38.734 09:33:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:38.734 09:33:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.734 09:33:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76158 00:07:38.734 09:33:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.734 09:33:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.734 killing process with pid 76158 00:07:38.734 09:33:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76158' 00:07:38.734 09:33:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 76158 00:07:38.734 09:33:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 76158 00:07:39.670 09:33:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 76173 00:07:39.670 09:33:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 76173 ']' 00:07:39.670 09:33:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 76173 00:07:39.670 09:33:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:39.670 09:33:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.670 09:33:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76173 00:07:39.670 09:33:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:39.670 09:33:17 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:39.670 killing process with pid 76173 00:07:39.670 09:33:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76173' 00:07:39.670 09:33:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 76173 00:07:39.670 09:33:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 76173 00:07:39.928 00:07:39.928 real 0m3.811s 00:07:39.928 user 0m4.000s 00:07:39.928 sys 0m1.254s 00:07:39.928 09:33:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.928 09:33:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:39.928 ************************************ 00:07:39.928 END TEST locking_app_on_unlocked_coremask 00:07:39.928 ************************************ 00:07:39.928 09:33:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:39.928 09:33:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.928 09:33:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.928 09:33:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.928 ************************************ 00:07:39.928 START TEST locking_app_on_locked_coremask 00:07:39.928 ************************************ 00:07:39.928 09:33:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:39.928 09:33:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=76237 00:07:39.928 09:33:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:39.928 09:33:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 76237 /var/tmp/spdk.sock 00:07:39.928 09:33:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 76237 ']' 00:07:39.928 09:33:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.928 09:33:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.928 09:33:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.928 09:33:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.928 09:33:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.186 [2024-07-24 09:33:17.822791] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:07:40.186 [2024-07-24 09:33:17.822987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76237 ] 00:07:40.186 [2024-07-24 09:33:17.987537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.444 [2024-07-24 09:33:18.042145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.010 09:33:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.010 09:33:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:41.010 09:33:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=76253 00:07:41.010 09:33:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:41.010 09:33:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 76253 /var/tmp/spdk2.sock 00:07:41.011 09:33:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:41.011 09:33:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 76253 /var/tmp/spdk2.sock 00:07:41.011 09:33:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:41.011 09:33:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.011 09:33:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:41.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:41.011 09:33:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.011 09:33:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 76253 /var/tmp/spdk2.sock 00:07:41.011 09:33:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 76253 ']' 00:07:41.011 09:33:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:41.011 09:33:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.011 09:33:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:41.011 09:33:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.011 09:33:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.011 [2024-07-24 09:33:18.764973] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:07:41.011 [2024-07-24 09:33:18.765114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76253 ] 00:07:41.269 [2024-07-24 09:33:18.931340] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 76237 has claimed it. 00:07:41.269 [2024-07-24 09:33:18.931428] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:41.837 ERROR: process (pid: 76253) is no longer running 00:07:41.837 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (76253) - No such process 00:07:41.837 09:33:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.837 09:33:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:41.837 09:33:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:41.837 09:33:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:41.837 09:33:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:41.837 09:33:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:41.837 09:33:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 76237 00:07:41.837 09:33:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76237 00:07:41.837 09:33:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:42.096 09:33:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 76237 00:07:42.096 09:33:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 76237 ']' 00:07:42.096 09:33:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 76237 00:07:42.096 09:33:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:42.096 09:33:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.096 09:33:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76237 00:07:42.096 09:33:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.096 killing process with pid 76237 00:07:42.096 09:33:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.096 09:33:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76237' 00:07:42.096 09:33:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 76237 00:07:42.096 09:33:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 76237 00:07:42.664 ************************************ 00:07:42.664 END TEST locking_app_on_locked_coremask 00:07:42.664 ************************************ 00:07:42.664 00:07:42.664 real 0m2.496s 00:07:42.664 user 0m2.694s 00:07:42.664 sys 0m0.770s 00:07:42.664 09:33:20 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.664 09:33:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.664 09:33:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:42.664 09:33:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.664 09:33:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.664 09:33:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:42.664 ************************************ 00:07:42.664 START TEST locking_overlapped_coremask 00:07:42.664 ************************************ 00:07:42.664 09:33:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:42.664 09:33:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=76301 00:07:42.664 09:33:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 76301 /var/tmp/spdk.sock 00:07:42.664 09:33:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:42.664 09:33:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 76301 ']' 00:07:42.664 09:33:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.664 09:33:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.664 09:33:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.664 09:33:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.664 09:33:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.664 [2024-07-24 09:33:20.385338] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:07:42.664 [2024-07-24 09:33:20.385481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76301 ] 00:07:42.923 [2024-07-24 09:33:20.553342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.923 [2024-07-24 09:33:20.602673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.923 [2024-07-24 09:33:20.602766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.923 [2024-07-24 09:33:20.602866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.492 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.492 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:43.492 09:33:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=76319 00:07:43.492 09:33:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:43.492 09:33:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 76319 /var/tmp/spdk2.sock 00:07:43.492 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:43.492 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 76319 /var/tmp/spdk2.sock 00:07:43.492 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:43.492 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.492 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:43.492 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.492 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 76319 /var/tmp/spdk2.sock 00:07:43.492 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 76319 ']' 00:07:43.492 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:43.492 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.492 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:43.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:43.492 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.492 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.492 [2024-07-24 09:33:21.268848] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
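Note on the coremasks in this test: the first target holds -m 0x7 (binary 00111, cores 0-2) and the second is launched with -m 0x1c (binary 11100, cores 2-4), so the two masks intersect exactly on core 2. That is why the claim failure reported below names core 2 even though the second target never asked for cores 0 or 1.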
00:07:43.492 [2024-07-24 09:33:21.268982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76319 ] 00:07:43.751 [2024-07-24 09:33:21.435418] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 76301 has claimed it. 00:07:43.751 [2024-07-24 09:33:21.435479] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:44.319 ERROR: process (pid: 76319) is no longer running 00:07:44.319 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (76319) - No such process 00:07:44.319 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.319 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:44.319 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:44.319 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:44.319 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:44.319 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:44.319 09:33:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:44.319 09:33:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:44.319 09:33:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:44.319 09:33:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:44.319 09:33:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 76301 00:07:44.319 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 76301 ']' 00:07:44.319 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 76301 00:07:44.319 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:44.319 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.319 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76301 00:07:44.319 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:44.319 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:44.319 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76301' 00:07:44.319 killing process with pid 76301 00:07:44.319 09:33:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 76301 00:07:44.319 09:33:21 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 76301 00:07:44.578 00:07:44.578 real 0m2.057s 00:07:44.578 user 0m5.349s 00:07:44.578 sys 0m0.583s 00:07:44.578 09:33:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.578 09:33:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.578 ************************************ 00:07:44.578 END TEST locking_overlapped_coremask 00:07:44.578 ************************************ 00:07:44.837 09:33:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:44.837 09:33:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.837 09:33:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.837 09:33:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:44.837 ************************************ 00:07:44.837 START TEST locking_overlapped_coremask_via_rpc 00:07:44.837 ************************************ 00:07:44.837 09:33:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:44.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.837 09:33:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=76361 00:07:44.837 09:33:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 76361 /var/tmp/spdk.sock 00:07:44.837 09:33:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:44.837 09:33:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 76361 ']' 00:07:44.837 09:33:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.837 09:33:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.837 09:33:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.837 09:33:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.837 09:33:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.837 [2024-07-24 09:33:22.521670] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:07:44.837 [2024-07-24 09:33:22.522034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76361 ] 00:07:45.095 [2024-07-24 09:33:22.689434] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:45.095 [2024-07-24 09:33:22.689803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:45.095 [2024-07-24 09:33:22.742458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.095 [2024-07-24 09:33:22.742552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.096 [2024-07-24 09:33:22.742661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:45.662 09:33:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.662 09:33:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:45.662 09:33:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:45.662 09:33:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=76379 00:07:45.662 09:33:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 76379 /var/tmp/spdk2.sock 00:07:45.662 09:33:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 76379 ']' 00:07:45.662 09:33:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:45.662 09:33:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.662 09:33:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:45.662 09:33:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.662 09:33:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.662 [2024-07-24 09:33:23.394301] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:07:45.662 [2024-07-24 09:33:23.394644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76379 ] 00:07:45.920 [2024-07-24 09:33:23.558709] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:45.920 [2024-07-24 09:33:23.558816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:45.920 [2024-07-24 09:33:23.656869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.920 [2024-07-24 09:33:23.660371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.920 [2024-07-24 09:33:23.660473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.487 [2024-07-24 09:33:24.242387] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 76361 has claimed it. 
00:07:46.487 request: 00:07:46.487 { 00:07:46.487 "method": "framework_enable_cpumask_locks", 00:07:46.487 "req_id": 1 00:07:46.487 } 00:07:46.487 Got JSON-RPC error response 00:07:46.487 response: 00:07:46.487 { 00:07:46.487 "code": -32603, 00:07:46.487 "message": "Failed to claim CPU core: 2" 00:07:46.487 } 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 76361 /var/tmp/spdk.sock 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 76361 ']' 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.487 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.746 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.746 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:46.746 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 76379 /var/tmp/spdk2.sock 00:07:46.746 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 76379 ']' 00:07:46.746 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:46.746 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:46.746 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
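Note: the request/response pair above is the second target (started with --disable-cpumask-locks) trying to claim its cores after startup. With the rpc.py script used elsewhere in this log, the two calls would look roughly like this (socket paths as in the test):
  # first target, default socket /var/tmp/spdk.sock: succeeds and creates the lock files
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
  # second target on /var/tmp/spdk2.sock: its 0x1c mask overlaps core 2 of the first target's 0x7,
  # so the call fails with -32603 "Failed to claim CPU core: 2", as captured above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks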
00:07:46.746 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.746 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.005 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.005 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:47.005 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:47.005 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:47.005 ************************************ 00:07:47.005 END TEST locking_overlapped_coremask_via_rpc 00:07:47.005 ************************************ 00:07:47.005 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:47.005 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:47.005 00:07:47.005 real 0m2.249s 00:07:47.005 user 0m0.977s 00:07:47.005 sys 0m0.194s 00:07:47.005 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.005 09:33:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.005 09:33:24 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:47.005 09:33:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 76361 ]] 00:07:47.005 09:33:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 76361 00:07:47.005 09:33:24 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 76361 ']' 00:07:47.005 09:33:24 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 76361 00:07:47.005 09:33:24 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:47.005 09:33:24 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:47.005 09:33:24 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76361 00:07:47.005 killing process with pid 76361 00:07:47.005 09:33:24 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:47.005 09:33:24 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:47.005 09:33:24 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76361' 00:07:47.005 09:33:24 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 76361 00:07:47.005 09:33:24 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 76361 00:07:47.573 09:33:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 76379 ]] 00:07:47.573 09:33:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 76379 00:07:47.573 09:33:25 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 76379 ']' 00:07:47.573 09:33:25 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 76379 00:07:47.573 09:33:25 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:47.573 09:33:25 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:47.573 
09:33:25 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76379 00:07:47.573 killing process with pid 76379 00:07:47.573 09:33:25 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:47.573 09:33:25 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:47.573 09:33:25 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76379' 00:07:47.573 09:33:25 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 76379 00:07:47.573 09:33:25 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 76379 00:07:47.831 09:33:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:47.831 09:33:25 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:47.831 09:33:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 76361 ]] 00:07:47.831 Process with pid 76361 is not found 00:07:47.831 Process with pid 76379 is not found 00:07:47.831 09:33:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 76361 00:07:47.831 09:33:25 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 76361 ']' 00:07:47.831 09:33:25 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 76361 00:07:47.831 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76361) - No such process 00:07:47.831 09:33:25 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 76361 is not found' 00:07:47.831 09:33:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 76379 ]] 00:07:47.831 09:33:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 76379 00:07:47.831 09:33:25 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 76379 ']' 00:07:47.831 09:33:25 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 76379 00:07:47.831 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76379) - No such process 00:07:47.831 09:33:25 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 76379 is not found' 00:07:47.831 09:33:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:47.831 00:07:47.831 real 0m19.938s 00:07:47.831 user 0m32.123s 00:07:47.831 sys 0m6.552s 00:07:47.831 09:33:25 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.831 09:33:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:47.831 ************************************ 00:07:47.831 END TEST cpu_locks 00:07:47.831 ************************************ 00:07:48.090 00:07:48.090 real 0m47.638s 00:07:48.090 user 1m28.632s 00:07:48.090 sys 0m10.688s 00:07:48.090 09:33:25 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.090 09:33:25 event -- common/autotest_common.sh@10 -- # set +x 00:07:48.090 ************************************ 00:07:48.090 END TEST event 00:07:48.090 ************************************ 00:07:48.090 09:33:25 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:48.090 09:33:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:48.090 09:33:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.090 09:33:25 -- common/autotest_common.sh@10 -- # set +x 00:07:48.090 ************************************ 00:07:48.090 START TEST thread 00:07:48.090 ************************************ 00:07:48.090 09:33:25 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:48.090 * Looking for test storage... 
00:07:48.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:48.090 09:33:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:48.090 09:33:25 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:48.090 09:33:25 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.090 09:33:25 thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.090 ************************************ 00:07:48.090 START TEST thread_poller_perf 00:07:48.090 ************************************ 00:07:48.090 09:33:25 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:48.349 [2024-07-24 09:33:25.921361] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:07:48.349 [2024-07-24 09:33:25.921630] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76498 ] 00:07:48.349 [2024-07-24 09:33:26.087384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.349 [2024-07-24 09:33:26.134314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.349 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:49.726 ====================================== 00:07:49.726 busy:2498531336 (cyc) 00:07:49.726 total_run_count: 389000 00:07:49.726 tsc_hz: 2490000000 (cyc) 00:07:49.726 ====================================== 00:07:49.726 poller_cost: 6422 (cyc), 2579 (nsec) 00:07:49.726 00:07:49.726 real 0m1.359s 00:07:49.726 user 0m1.149s 00:07:49.726 sys 0m0.102s 00:07:49.726 09:33:27 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.726 ************************************ 00:07:49.726 END TEST thread_poller_perf 00:07:49.726 ************************************ 00:07:49.726 09:33:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:49.726 09:33:27 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:49.726 09:33:27 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:49.726 09:33:27 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.726 09:33:27 thread -- common/autotest_common.sh@10 -- # set +x 00:07:49.726 ************************************ 00:07:49.726 START TEST thread_poller_perf 00:07:49.726 ************************************ 00:07:49.726 09:33:27 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:49.726 [2024-07-24 09:33:27.349436] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:07:49.726 [2024-07-24 09:33:27.349592] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76540 ] 00:07:49.726 [2024-07-24 09:33:27.517347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.986 Running 1000 pollers for 1 seconds with 0 microseconds period. 
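Note on reading the poller_perf summaries: poller_cost is consistent with busy cycles divided by total_run_count, converted to time with the reported tsc_hz. For the 1-microsecond-period run above, 2498531336 cyc spread over 389000 iterations works out to roughly 6423 cyc per poll, in line with the reported poller_cost of 6422, and 6422 cyc at the 2.49 GHz tsc_hz is roughly 2579 nsec, matching the summary. The same arithmetic applies to the 0-microsecond-period run whose results follow.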
00:07:49.986 [2024-07-24 09:33:27.560945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.925 ====================================== 00:07:50.925 busy:2494215120 (cyc) 00:07:50.925 total_run_count: 5092000 00:07:50.925 tsc_hz: 2490000000 (cyc) 00:07:50.925 ====================================== 00:07:50.925 poller_cost: 489 (cyc), 196 (nsec) 00:07:50.925 00:07:50.925 real 0m1.351s 00:07:50.925 user 0m1.123s 00:07:50.925 sys 0m0.120s 00:07:50.925 09:33:28 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.925 09:33:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:50.925 ************************************ 00:07:50.925 END TEST thread_poller_perf 00:07:50.925 ************************************ 00:07:50.925 09:33:28 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:50.925 00:07:50.925 real 0m2.974s 00:07:50.925 user 0m2.367s 00:07:50.925 sys 0m0.400s 00:07:50.925 09:33:28 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.925 09:33:28 thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.925 ************************************ 00:07:50.925 END TEST thread 00:07:50.925 ************************************ 00:07:51.184 09:33:28 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:51.184 09:33:28 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:51.184 09:33:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:51.184 09:33:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.184 09:33:28 -- common/autotest_common.sh@10 -- # set +x 00:07:51.184 ************************************ 00:07:51.184 START TEST app_cmdline 00:07:51.184 ************************************ 00:07:51.184 09:33:28 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:51.184 * Looking for test storage... 00:07:51.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:51.184 09:33:28 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:51.184 09:33:28 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=76610 00:07:51.184 09:33:28 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:51.184 09:33:28 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 76610 00:07:51.184 09:33:28 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 76610 ']' 00:07:51.184 09:33:28 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.184 09:33:28 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.184 09:33:28 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.184 09:33:28 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.184 09:33:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:51.443 [2024-07-24 09:33:29.003751] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
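Note: the app_cmdline target above is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are accepted over the RPC socket. The trace that follows exercises exactly that: spdk_get_version returns the version object, while env_dpdk_get_mem_stats is rejected with -32601 "Method not found". By hand, the same check would look roughly like this (default socket assumed):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version          # allowed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods           # allowed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats    # rejected: Method not found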
00:07:51.443 [2024-07-24 09:33:29.003886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76610 ] 00:07:51.443 [2024-07-24 09:33:29.170710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.443 [2024-07-24 09:33:29.217853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.011 09:33:29 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.011 09:33:29 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:52.011 09:33:29 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:52.269 { 00:07:52.269 "version": "SPDK v24.09-pre git sha1 8711e7e9b", 00:07:52.269 "fields": { 00:07:52.269 "major": 24, 00:07:52.269 "minor": 9, 00:07:52.269 "patch": 0, 00:07:52.269 "suffix": "-pre", 00:07:52.269 "commit": "8711e7e9b" 00:07:52.269 } 00:07:52.269 } 00:07:52.269 09:33:29 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:52.269 09:33:29 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:52.269 09:33:29 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:52.269 09:33:29 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:52.269 09:33:29 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:52.269 09:33:29 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:52.269 09:33:29 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:52.269 09:33:29 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.269 09:33:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:52.269 09:33:29 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.269 09:33:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:52.269 09:33:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:52.269 09:33:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:52.269 09:33:30 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:52.269 09:33:30 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:52.269 09:33:30 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:52.269 09:33:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:52.270 09:33:30 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:52.270 09:33:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:52.270 09:33:30 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:52.270 09:33:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:52.270 09:33:30 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:52.270 09:33:30 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:52.270 09:33:30 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:52.528 request: 00:07:52.528 { 00:07:52.528 "method": "env_dpdk_get_mem_stats", 00:07:52.528 "req_id": 1 00:07:52.528 } 00:07:52.528 Got JSON-RPC error response 00:07:52.528 response: 00:07:52.528 { 00:07:52.528 "code": -32601, 00:07:52.528 "message": "Method not found" 00:07:52.528 } 00:07:52.528 09:33:30 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:52.528 09:33:30 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:52.528 09:33:30 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:52.528 09:33:30 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:52.528 09:33:30 app_cmdline -- app/cmdline.sh@1 -- # killprocess 76610 00:07:52.528 09:33:30 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 76610 ']' 00:07:52.528 09:33:30 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 76610 00:07:52.528 09:33:30 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:52.528 09:33:30 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.528 09:33:30 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76610 00:07:52.528 killing process with pid 76610 00:07:52.528 09:33:30 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.528 09:33:30 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.528 09:33:30 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76610' 00:07:52.528 09:33:30 app_cmdline -- common/autotest_common.sh@969 -- # kill 76610 00:07:52.528 09:33:30 app_cmdline -- common/autotest_common.sh@974 -- # wait 76610 00:07:53.097 ************************************ 00:07:53.097 END TEST app_cmdline 00:07:53.097 ************************************ 00:07:53.097 00:07:53.097 real 0m1.847s 00:07:53.097 user 0m2.031s 00:07:53.097 sys 0m0.535s 00:07:53.097 09:33:30 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.097 09:33:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:53.097 09:33:30 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:53.097 09:33:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:53.097 09:33:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.097 09:33:30 -- common/autotest_common.sh@10 -- # set +x 00:07:53.097 ************************************ 00:07:53.097 START TEST version 00:07:53.097 ************************************ 00:07:53.097 09:33:30 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:53.097 * Looking for test storage... 
00:07:53.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:53.097 09:33:30 version -- app/version.sh@17 -- # get_header_version major 00:07:53.098 09:33:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:53.098 09:33:30 version -- app/version.sh@14 -- # cut -f2 00:07:53.098 09:33:30 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.098 09:33:30 version -- app/version.sh@17 -- # major=24 00:07:53.098 09:33:30 version -- app/version.sh@18 -- # get_header_version minor 00:07:53.098 09:33:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:53.098 09:33:30 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.098 09:33:30 version -- app/version.sh@14 -- # cut -f2 00:07:53.098 09:33:30 version -- app/version.sh@18 -- # minor=9 00:07:53.098 09:33:30 version -- app/version.sh@19 -- # get_header_version patch 00:07:53.098 09:33:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:53.098 09:33:30 version -- app/version.sh@14 -- # cut -f2 00:07:53.098 09:33:30 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.098 09:33:30 version -- app/version.sh@19 -- # patch=0 00:07:53.098 09:33:30 version -- app/version.sh@20 -- # get_header_version suffix 00:07:53.098 09:33:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:53.098 09:33:30 version -- app/version.sh@14 -- # cut -f2 00:07:53.098 09:33:30 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.098 09:33:30 version -- app/version.sh@20 -- # suffix=-pre 00:07:53.098 09:33:30 version -- app/version.sh@22 -- # version=24.9 00:07:53.098 09:33:30 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:53.098 09:33:30 version -- app/version.sh@28 -- # version=24.9rc0 00:07:53.098 09:33:30 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:53.098 09:33:30 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:53.357 09:33:30 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:53.357 09:33:30 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:53.357 00:07:53.357 real 0m0.220s 00:07:53.357 user 0m0.119s 00:07:53.357 sys 0m0.154s 00:07:53.357 ************************************ 00:07:53.357 END TEST version 00:07:53.357 ************************************ 00:07:53.357 09:33:30 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.357 09:33:30 version -- common/autotest_common.sh@10 -- # set +x 00:07:53.357 09:33:30 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:07:53.357 09:33:30 -- spdk/autotest.sh@202 -- # uname -s 00:07:53.357 09:33:30 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:07:53.357 09:33:30 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:53.357 09:33:30 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:53.357 09:33:30 -- spdk/autotest.sh@215 -- # '[' 1 -eq 1 ']' 00:07:53.357 09:33:30 -- spdk/autotest.sh@216 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:53.357 09:33:30 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
00:07:53.357 09:33:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.357 09:33:30 -- common/autotest_common.sh@10 -- # set +x 00:07:53.357 ************************************ 00:07:53.357 START TEST blockdev_nvme 00:07:53.357 ************************************ 00:07:53.357 09:33:30 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:53.357 * Looking for test storage... 00:07:53.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:53.357 09:33:31 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=76755 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:53.357 09:33:31 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 76755 00:07:53.357 09:33:31 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 76755 ']' 00:07:53.357 09:33:31 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.357 09:33:31 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.357 09:33:31 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:53.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.357 09:33:31 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.357 09:33:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:53.616 [2024-07-24 09:33:31.239506] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:07:53.616 [2024-07-24 09:33:31.239663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76755 ] 00:07:53.616 [2024-07-24 09:33:31.406434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.874 [2024-07-24 09:33:31.451889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.442 09:33:32 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.442 09:33:32 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:07:54.442 09:33:32 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:54.442 09:33:32 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:07:54.442 09:33:32 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:54.442 09:33:32 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:54.442 09:33:32 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:54.442 09:33:32 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:54.442 09:33:32 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.442 09:33:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.701 09:33:32 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.701 09:33:32 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:54.701 09:33:32 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.701 09:33:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.701 09:33:32 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.701 09:33:32 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:07:54.701 09:33:32 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:54.701 09:33:32 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.701 09:33:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.701 09:33:32 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.701 09:33:32 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:54.701 09:33:32 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.701 09:33:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.701 09:33:32 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.701 09:33:32 
blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:54.701 09:33:32 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.701 09:33:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.701 09:33:32 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.701 09:33:32 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:54.701 09:33:32 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:54.701 09:33:32 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:54.701 09:33:32 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.701 09:33:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.960 09:33:32 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.960 09:33:32 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:54.960 09:33:32 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:54.961 09:33:32 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "65b21965-1bc1-42f4-b53a-11f1015c214f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "65b21965-1bc1-42f4-b53a-11f1015c214f",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "60f6a305-c426-4e25-acc9-7d297cb268f3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "60f6a305-c426-4e25-acc9-7d297cb268f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' 
' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2e17542d-cbda-4f7e-81fc-f0cb624f0b1f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2e17542d-cbda-4f7e-81fc-f0cb624f0b1f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ca085998-1f2a-46f5-b768-c17d7d68ab84"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ca085998-1f2a-46f5-b768-c17d7d68ab84",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' 
},' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "e76818ce-7305-42ee-b3ac-f7bc9db5c140"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e76818ce-7305-42ee-b3ac-f7bc9db5c140",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "147b80a4-8b73-4168-a66e-8c34ac06572e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "147b80a4-8b73-4168-a66e-8c34ac06572e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:54.961 09:33:32 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:54.961 09:33:32 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:54.961 09:33:32 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:54.961 09:33:32 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 76755 00:07:54.961 09:33:32 blockdev_nvme -- 
common/autotest_common.sh@950 -- # '[' -z 76755 ']' 00:07:54.961 09:33:32 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 76755 00:07:54.961 09:33:32 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:07:54.961 09:33:32 blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:54.961 09:33:32 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76755 00:07:54.961 killing process with pid 76755 00:07:54.961 09:33:32 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:54.961 09:33:32 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:54.961 09:33:32 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76755' 00:07:54.961 09:33:32 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 76755 00:07:54.961 09:33:32 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 76755 00:07:55.527 09:33:33 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:55.527 09:33:33 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:55.527 09:33:33 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:55.527 09:33:33 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.527 09:33:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.527 ************************************ 00:07:55.527 START TEST bdev_hello_world 00:07:55.527 ************************************ 00:07:55.527 09:33:33 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:55.527 [2024-07-24 09:33:33.173801] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:07:55.527 [2024-07-24 09:33:33.173984] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76828 ] 00:07:55.794 [2024-07-24 09:33:33.352030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.794 [2024-07-24 09:33:33.400891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.056 [2024-07-24 09:33:33.792769] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:56.056 [2024-07-24 09:33:33.792848] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:56.056 [2024-07-24 09:33:33.792887] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:56.056 [2024-07-24 09:33:33.795333] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:56.056 [2024-07-24 09:33:33.795729] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:56.056 [2024-07-24 09:33:33.795790] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:56.056 [2024-07-24 09:33:33.796171] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:07:56.056 00:07:56.056 [2024-07-24 09:33:33.796217] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:56.315 00:07:56.315 real 0m0.945s 00:07:56.315 user 0m0.604s 00:07:56.315 sys 0m0.236s 00:07:56.315 ************************************ 00:07:56.315 END TEST bdev_hello_world 00:07:56.315 ************************************ 00:07:56.315 09:33:34 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.315 09:33:34 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:56.315 09:33:34 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:56.315 09:33:34 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:56.315 09:33:34 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.315 09:33:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.315 ************************************ 00:07:56.315 START TEST bdev_bounds 00:07:56.315 ************************************ 00:07:56.315 09:33:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:56.315 09:33:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=76859 00:07:56.315 09:33:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:56.315 Process bdevio pid: 76859 00:07:56.315 09:33:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:56.315 09:33:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 76859' 00:07:56.315 09:33:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 76859 00:07:56.315 09:33:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 76859 ']' 00:07:56.315 09:33:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.315 09:33:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.315 09:33:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.315 09:33:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.315 09:33:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:56.574 [2024-07-24 09:33:34.184182] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:07:56.574 [2024-07-24 09:33:34.184334] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76859 ] 00:07:56.574 [2024-07-24 09:33:34.358261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:56.833 [2024-07-24 09:33:34.409407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.833 [2024-07-24 09:33:34.409362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.833 [2024-07-24 09:33:34.409654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.400 09:33:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.400 09:33:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:57.400 09:33:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:57.400 I/O targets: 00:07:57.400 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:57.400 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:57.400 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:57.400 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:57.400 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:57.400 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:57.400 00:07:57.400 00:07:57.400 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.400 http://cunit.sourceforge.net/ 00:07:57.400 00:07:57.400 00:07:57.400 Suite: bdevio tests on: Nvme3n1 00:07:57.400 Test: blockdev write read block ...passed 00:07:57.400 Test: blockdev write zeroes read block ...passed 00:07:57.400 Test: blockdev write zeroes read no split ...passed 00:07:57.400 Test: blockdev write zeroes read split ...passed 00:07:57.400 Test: blockdev write zeroes read split partial ...passed 00:07:57.400 Test: blockdev reset ...[2024-07-24 09:33:35.198696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:57.400 passed 00:07:57.400 Test: blockdev write read 8 blocks ...[2024-07-24 09:33:35.202482] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:57.400 passed 00:07:57.400 Test: blockdev write read size > 128k ...passed 00:07:57.400 Test: blockdev write read invalid size ...passed 00:07:57.400 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:57.400 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:57.400 Test: blockdev write read max offset ...passed 00:07:57.400 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:57.400 Test: blockdev writev readv 8 blocks ...passed 00:07:57.400 Test: blockdev writev readv 30 x 1block ...passed 00:07:57.400 Test: blockdev writev readv block ...passed 00:07:57.400 Test: blockdev writev readv size > 128k ...passed 00:07:57.400 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:57.400 Test: blockdev comparev and writev ...[2024-07-24 09:33:35.208910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7604000 len:0x1000 00:07:57.400 [2024-07-24 09:33:35.208967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:57.400 passed 00:07:57.400 Test: blockdev nvme passthru rw ...passed 00:07:57.400 Test: blockdev nvme passthru vendor specific ...passed 00:07:57.400 Test: blockdev nvme admin passthru ...[2024-07-24 09:33:35.209818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:57.400 [2024-07-24 09:33:35.209868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:57.659 passed 00:07:57.659 Test: blockdev copy ...passed 00:07:57.659 Suite: bdevio tests on: Nvme2n3 00:07:57.659 Test: blockdev write read block ...passed 00:07:57.659 Test: blockdev write zeroes read block ...passed 00:07:57.659 Test: blockdev write zeroes read no split ...passed 00:07:57.659 Test: blockdev write zeroes read split ...passed 00:07:57.659 Test: blockdev write zeroes read split partial ...passed 00:07:57.659 Test: blockdev reset ...[2024-07-24 09:33:35.234704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:57.659 [2024-07-24 09:33:35.237291] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:57.659 passed 00:07:57.659 Test: blockdev write read 8 blocks ...passed 00:07:57.659 Test: blockdev write read size > 128k ...passed 00:07:57.659 Test: blockdev write read invalid size ...passed 00:07:57.659 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:57.659 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:57.659 Test: blockdev write read max offset ...passed 00:07:57.659 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:57.659 Test: blockdev writev readv 8 blocks ...passed 00:07:57.659 Test: blockdev writev readv 30 x 1block ...passed 00:07:57.659 Test: blockdev writev readv block ...passed 00:07:57.659 Test: blockdev writev readv size > 128k ...passed 00:07:57.659 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:57.659 Test: blockdev comparev and writev ...[2024-07-24 09:33:35.244806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7602000 len:0x1000 00:07:57.659 [2024-07-24 09:33:35.244862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:57.659 passed 00:07:57.659 Test: blockdev nvme passthru rw ...passed 00:07:57.659 Test: blockdev nvme passthru vendor specific ...[2024-07-24 09:33:35.245816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:57.659 [2024-07-24 09:33:35.245856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:57.659 passed 00:07:57.659 Test: blockdev nvme admin passthru ...passed 00:07:57.659 Test: blockdev copy ...passed 00:07:57.659 Suite: bdevio tests on: Nvme2n2 00:07:57.659 Test: blockdev write read block ...passed 00:07:57.659 Test: blockdev write zeroes read block ...passed 00:07:57.659 Test: blockdev write zeroes read no split ...passed 00:07:57.659 Test: blockdev write zeroes read split ...passed 00:07:57.659 Test: blockdev write zeroes read split partial ...passed 00:07:57.659 Test: blockdev reset ...[2024-07-24 09:33:35.273784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:57.659 [2024-07-24 09:33:35.276278] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:57.659 passed 00:07:57.659 Test: blockdev write read 8 blocks ...passed 00:07:57.659 Test: blockdev write read size > 128k ...passed 00:07:57.659 Test: blockdev write read invalid size ...passed 00:07:57.659 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:57.659 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:57.659 Test: blockdev write read max offset ...passed 00:07:57.659 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:57.659 Test: blockdev writev readv 8 blocks ...passed 00:07:57.659 Test: blockdev writev readv 30 x 1block ...passed 00:07:57.659 Test: blockdev writev readv block ...passed 00:07:57.659 Test: blockdev writev readv size > 128k ...passed 00:07:57.659 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:57.659 Test: blockdev comparev and writev ...[2024-07-24 09:33:35.284091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d760c000 len:0x1000 00:07:57.659 [2024-07-24 09:33:35.284142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:57.659 passed 00:07:57.659 Test: blockdev nvme passthru rw ...passed 00:07:57.659 Test: blockdev nvme passthru vendor specific ...passed 00:07:57.659 Test: blockdev nvme admin passthru ...[2024-07-24 09:33:35.285135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:57.659 [2024-07-24 09:33:35.285181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:57.659 passed 00:07:57.659 Test: blockdev copy ...passed 00:07:57.659 Suite: bdevio tests on: Nvme2n1 00:07:57.659 Test: blockdev write read block ...passed 00:07:57.659 Test: blockdev write zeroes read block ...passed 00:07:57.659 Test: blockdev write zeroes read no split ...passed 00:07:57.659 Test: blockdev write zeroes read split ...passed 00:07:57.659 Test: blockdev write zeroes read split partial ...passed 00:07:57.659 Test: blockdev reset ...[2024-07-24 09:33:35.315506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:57.659 [2024-07-24 09:33:35.317971] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:57.659 passed 00:07:57.659 Test: blockdev write read 8 blocks ...passed 00:07:57.659 Test: blockdev write read size > 128k ...passed 00:07:57.659 Test: blockdev write read invalid size ...passed 00:07:57.659 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:57.659 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:57.659 Test: blockdev write read max offset ...passed 00:07:57.659 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:57.659 Test: blockdev writev readv 8 blocks ...passed 00:07:57.659 Test: blockdev writev readv 30 x 1block ...passed 00:07:57.659 Test: blockdev writev readv block ...passed 00:07:57.659 Test: blockdev writev readv size > 128k ...passed 00:07:57.659 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:57.659 Test: blockdev comparev and writev ...[2024-07-24 09:33:35.324497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7a36000 len:0x1000 00:07:57.659 [2024-07-24 09:33:35.324558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:57.659 passed 00:07:57.659 Test: blockdev nvme passthru rw ...passed 00:07:57.659 Test: blockdev nvme passthru vendor specific ...[2024-07-24 09:33:35.325301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:57.659 [2024-07-24 09:33:35.325343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:57.659 passed 00:07:57.659 Test: blockdev nvme admin passthru ...passed 00:07:57.659 Test: blockdev copy ...passed 00:07:57.659 Suite: bdevio tests on: Nvme1n1 00:07:57.659 Test: blockdev write read block ...passed 00:07:57.659 Test: blockdev write zeroes read block ...passed 00:07:57.659 Test: blockdev write zeroes read no split ...passed 00:07:57.659 Test: blockdev write zeroes read split ...passed 00:07:57.659 Test: blockdev write zeroes read split partial ...passed 00:07:57.659 Test: blockdev reset ...[2024-07-24 09:33:35.345021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:57.659 [2024-07-24 09:33:35.347699] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:57.659 passed 00:07:57.659 Test: blockdev write read 8 blocks ...passed 00:07:57.660 Test: blockdev write read size > 128k ...passed 00:07:57.660 Test: blockdev write read invalid size ...passed 00:07:57.660 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:57.660 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:57.660 Test: blockdev write read max offset ...passed 00:07:57.660 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:57.660 Test: blockdev writev readv 8 blocks ...passed 00:07:57.660 Test: blockdev writev readv 30 x 1block ...passed 00:07:57.660 Test: blockdev writev readv block ...passed 00:07:57.660 Test: blockdev writev readv size > 128k ...passed 00:07:57.660 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:57.660 Test: blockdev comparev and writev ...[2024-07-24 09:33:35.355649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7a32000 len:0x1000 00:07:57.660 [2024-07-24 09:33:35.355705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:57.660 passed 00:07:57.660 Test: blockdev nvme passthru rw ...passed 00:07:57.660 Test: blockdev nvme passthru vendor specific ...passed 00:07:57.660 Test: blockdev nvme admin passthru ...[2024-07-24 09:33:35.356864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:57.660 [2024-07-24 09:33:35.356908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:57.660 passed 00:07:57.660 Test: blockdev copy ...passed 00:07:57.660 Suite: bdevio tests on: Nvme0n1 00:07:57.660 Test: blockdev write read block ...passed 00:07:57.660 Test: blockdev write zeroes read block ...passed 00:07:57.660 Test: blockdev write zeroes read no split ...passed 00:07:57.660 Test: blockdev write zeroes read split ...passed 00:07:57.660 Test: blockdev write zeroes read split partial ...passed 00:07:57.660 Test: blockdev reset ...[2024-07-24 09:33:35.384702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:57.660 [2024-07-24 09:33:35.386937] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:57.660 passed 00:07:57.660 Test: blockdev write read 8 blocks ...passed 00:07:57.660 Test: blockdev write read size > 128k ...passed 00:07:57.660 Test: blockdev write read invalid size ...passed 00:07:57.660 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:57.660 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:57.660 Test: blockdev write read max offset ...passed 00:07:57.660 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:57.660 Test: blockdev writev readv 8 blocks ...passed 00:07:57.660 Test: blockdev writev readv 30 x 1block ...passed 00:07:57.660 Test: blockdev writev readv block ...passed 00:07:57.660 Test: blockdev writev readv size > 128k ...passed 00:07:57.660 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:57.660 Test: blockdev comparev and writev ...passed 00:07:57.660 Test: blockdev nvme passthru rw ...[2024-07-24 09:33:35.394255] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:57.660 separate metadata which is not supported yet. 00:07:57.660 passed 00:07:57.660 Test: blockdev nvme passthru vendor specific ...passed 00:07:57.660 Test: blockdev nvme admin passthru ...[2024-07-24 09:33:35.395239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:57.660 [2024-07-24 09:33:35.395292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:57.660 passed 00:07:57.660 Test: blockdev copy ...passed 00:07:57.660 00:07:57.660 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.660 suites 6 6 n/a 0 0 00:07:57.660 tests 138 138 138 0 0 00:07:57.660 asserts 893 893 893 0 n/a 00:07:57.660 00:07:57.660 Elapsed time = 0.509 seconds 00:07:57.660 0 00:07:57.660 09:33:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 76859 00:07:57.660 09:33:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 76859 ']' 00:07:57.660 09:33:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 76859 00:07:57.660 09:33:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:57.660 09:33:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.660 09:33:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76859 00:07:57.660 09:33:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:57.660 09:33:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:57.660 09:33:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76859' 00:07:57.660 killing process with pid 76859 00:07:57.660 09:33:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 76859 00:07:57.660 09:33:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 76859 00:07:57.919 09:33:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:57.919 00:07:57.919 real 0m1.596s 00:07:57.919 user 0m3.861s 00:07:57.919 sys 0m0.420s 00:07:57.919 09:33:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.919 09:33:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:57.919 ************************************ 00:07:57.919 END 
TEST bdev_bounds 00:07:57.919 ************************************ 00:07:58.178 09:33:35 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:58.178 09:33:35 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:58.179 09:33:35 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.179 09:33:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.179 ************************************ 00:07:58.179 START TEST bdev_nbd 00:07:58.179 ************************************ 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=76913 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 76913 /var/tmp/spdk-nbd.sock 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 76913 ']' 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:58.179 
09:33:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:58.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:58.179 09:33:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:58.179 [2024-07-24 09:33:35.869903] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:07:58.179 [2024-07-24 09:33:35.870267] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.438 [2024-07-24 09:33:36.041927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.438 [2024-07-24 09:33:36.089742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.007 09:33:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.007 09:33:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:59.007 09:33:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:59.007 09:33:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.007 09:33:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:59.007 09:33:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:59.007 09:33:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:59.007 09:33:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.007 09:33:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:59.007 09:33:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:59.007 09:33:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:59.007 09:33:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:59.007 09:33:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:59.007 09:33:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:59.007 09:33:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:59.266 09:33:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:59.266 09:33:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:59.266 09:33:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:59.266 09:33:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:59.266 09:33:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:59.266 09:33:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:59.266 09:33:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:59.266 09:33:36 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:59.266 09:33:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:59.266 09:33:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:59.266 09:33:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:59.266 09:33:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.266 1+0 records in 00:07:59.266 1+0 records out 00:07:59.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434987 s, 9.4 MB/s 00:07:59.266 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.266 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:59.266 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.266 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:59.266 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:59.266 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:59.266 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:59.266 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:59.524 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:59.524 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:59.524 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:59.524 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:59.524 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:59.524 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:59.524 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:59.524 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:59.524 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:59.525 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:59.525 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:59.525 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.525 1+0 records in 00:07:59.525 1+0 records out 00:07:59.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457611 s, 9.0 MB/s 00:07:59.525 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.525 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:59.525 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.525 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:59.525 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:59.525 09:33:37 
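The waitfornbd calls traced for nbd0 and nbd1 above follow a simple pattern: poll /proc/partitions until the node appears, then read one 4 KiB block through O_DIRECT and confirm a non-zero transfer. Roughly, and only as a sketch of the common/autotest_common.sh helper (retry count and temp file path are taken from the trace, the sleep interval is assumed):

waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        for ((i = 1; i <= 20; i++)); do
                grep -q -w "$nbd_name" /proc/partitions && break
                sleep 0.1
        done
        # one O_DIRECT read proves the device is usable, not merely listed
        dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]
}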
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:59.525 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:59.525 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:59.783 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:59.783 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:59.783 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:59.783 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:59.783 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:59.783 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:59.783 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:59.783 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:59.783 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:59.783 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:59.783 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:59.783 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.783 1+0 records in 00:07:59.783 1+0 records out 00:07:59.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555899 s, 7.4 MB/s 00:07:59.783 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.783 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:59.783 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.783 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:59.783 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:59.783 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:59.783 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:59.783 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:00.130 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:00.131 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:00.131 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:00.131 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:08:00.131 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:00.131 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:00.131 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:00.131 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:08:00.131 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:00.131 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( 
i = 1 )) 00:08:00.131 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:00.131 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.131 1+0 records in 00:08:00.131 1+0 records out 00:08:00.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000961531 s, 4.3 MB/s 00:08:00.131 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.131 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:00.131 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.131 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:00.131 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:00.131 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:00.131 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:00.131 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:00.390 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:00.390 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:00.390 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:00.390 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:08:00.390 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:00.390 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:00.390 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:00.390 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:08:00.390 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:00.390 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:00.390 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:00.390 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.390 1+0 records in 00:08:00.390 1+0 records out 00:08:00.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618863 s, 6.6 MB/s 00:08:00.390 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.390 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:00.390 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.390 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:00.390 09:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:00.390 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:00.390 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:00.390 09:33:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:00.390 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:00.390 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:00.390 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:00.390 09:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:08:00.390 09:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:00.390 09:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:00.390 09:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:00.390 09:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:08:00.390 09:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:00.390 09:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:00.390 09:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:00.390 09:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.648 1+0 records in 00:08:00.648 1+0 records out 00:08:00.648 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000872921 s, 4.7 MB/s 00:08:00.649 09:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.649 09:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:00.649 09:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.649 09:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:00.649 09:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:00.649 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:00.649 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:00.649 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:00.907 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:00.907 { 00:08:00.907 "nbd_device": "/dev/nbd0", 00:08:00.907 "bdev_name": "Nvme0n1" 00:08:00.907 }, 00:08:00.907 { 00:08:00.907 "nbd_device": "/dev/nbd1", 00:08:00.907 "bdev_name": "Nvme1n1" 00:08:00.907 }, 00:08:00.907 { 00:08:00.907 "nbd_device": "/dev/nbd2", 00:08:00.907 "bdev_name": "Nvme2n1" 00:08:00.907 }, 00:08:00.907 { 00:08:00.907 "nbd_device": "/dev/nbd3", 00:08:00.907 "bdev_name": "Nvme2n2" 00:08:00.907 }, 00:08:00.907 { 00:08:00.907 "nbd_device": "/dev/nbd4", 00:08:00.907 "bdev_name": "Nvme2n3" 00:08:00.907 }, 00:08:00.907 { 00:08:00.907 "nbd_device": "/dev/nbd5", 00:08:00.907 "bdev_name": "Nvme3n1" 00:08:00.907 } 00:08:00.907 ]' 00:08:00.907 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:00.907 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:00.907 { 00:08:00.907 "nbd_device": "/dev/nbd0", 00:08:00.907 "bdev_name": "Nvme0n1" 00:08:00.907 }, 00:08:00.907 { 00:08:00.907 "nbd_device": "/dev/nbd1", 00:08:00.907 "bdev_name": "Nvme1n1" 00:08:00.907 }, 00:08:00.907 { 00:08:00.907 
"nbd_device": "/dev/nbd2", 00:08:00.907 "bdev_name": "Nvme2n1" 00:08:00.907 }, 00:08:00.907 { 00:08:00.907 "nbd_device": "/dev/nbd3", 00:08:00.907 "bdev_name": "Nvme2n2" 00:08:00.907 }, 00:08:00.907 { 00:08:00.907 "nbd_device": "/dev/nbd4", 00:08:00.907 "bdev_name": "Nvme2n3" 00:08:00.907 }, 00:08:00.907 { 00:08:00.907 "nbd_device": "/dev/nbd5", 00:08:00.907 "bdev_name": "Nvme3n1" 00:08:00.907 } 00:08:00.907 ]' 00:08:00.907 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:00.907 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:00.907 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.907 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:00.907 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:00.907 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:00.907 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.907 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:01.166 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:01.166 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:01.166 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:01.166 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.166 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.166 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:01.166 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.166 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.166 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.166 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:01.166 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:01.166 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:01.166 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:01.166 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.166 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.166 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:01.166 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.166 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.166 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.166 09:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:01.425 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:01.425 09:33:39 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:01.425 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:01.425 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.425 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.425 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:01.425 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.425 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.425 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.425 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:01.683 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:01.683 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:01.683 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:01.683 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.683 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.683 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:01.683 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.683 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.683 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.683 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:01.941 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:01.941 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:01.941 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:01.941 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.941 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.941 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:01.941 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.941 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.941 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.941 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:02.200 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:02.200 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:02.200 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:02.200 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.200 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.200 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:02.200 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
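Teardown mirrors setup: each device is detached with nbd_stop_disk over the same socket, and a waitfornbd_exit-style loop polls /proc/partitions until the kernel drops the entry. A sketch under the same path assumptions as above:

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-nbd.sock
for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5; do
        "$SPDK/scripts/rpc.py" -s "$SOCK" nbd_stop_disk "$dev"
        name=$(basename "$dev")
        for ((i = 1; i <= 20; i++)); do
                grep -q -w "$name" /proc/partitions || break   # gone from the kernel, move on
                sleep 0.1
        done
done
# once all six are detached, nbd_get_disks should return an empty list, which is what the '[]' that follows reflects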
00:08:02.200 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.200 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:02.200 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.200 09:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:02.200 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:02.200 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:02.200 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:02.460 /dev/nbd0 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:02.460 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.719 1+0 records in 00:08:02.719 1+0 records out 00:08:02.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000717047 s, 5.7 MB/s 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:02.719 /dev/nbd1 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.719 1+0 records in 00:08:02.719 1+0 records out 
00:08:02.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380052 s, 10.8 MB/s 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:02.719 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:02.978 /dev/nbd10 00:08:02.978 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:02.978 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:02.978 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:08:02.978 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:02.978 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:02.978 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:02.978 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:08:02.978 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:02.978 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:02.978 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:02.978 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.978 1+0 records in 00:08:02.978 1+0 records out 00:08:02.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000759465 s, 5.4 MB/s 00:08:02.978 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.978 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:02.978 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.978 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:02.978 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:02.978 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.978 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:02.978 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:03.237 /dev/nbd11 00:08:03.237 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:03.237 09:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:03.237 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:08:03.237 09:33:40 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:03.237 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:03.237 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:03.237 09:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:08:03.237 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:03.237 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:03.237 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:03.237 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.237 1+0 records in 00:08:03.237 1+0 records out 00:08:03.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511219 s, 8.0 MB/s 00:08:03.237 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.237 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:03.237 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.237 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:03.237 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:03.237 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.237 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:03.237 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:03.497 /dev/nbd12 00:08:03.497 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:03.497 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:03.497 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:08:03.497 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:03.497 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:03.497 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:03.497 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:08:03.497 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:03.497 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:03.497 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:03.497 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.497 1+0 records in 00:08:03.497 1+0 records out 00:08:03.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000948738 s, 4.3 MB/s 00:08:03.497 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.497 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:03.497 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.497 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:03.497 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:03.497 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.497 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:03.497 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:03.757 /dev/nbd13 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.757 1+0 records in 00:08:03.757 1+0 records out 00:08:03.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00082846 s, 4.9 MB/s 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.757 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:04.016 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:04.016 { 00:08:04.016 "nbd_device": "/dev/nbd0", 00:08:04.016 "bdev_name": "Nvme0n1" 00:08:04.016 }, 00:08:04.016 { 00:08:04.016 "nbd_device": "/dev/nbd1", 00:08:04.016 "bdev_name": "Nvme1n1" 00:08:04.016 }, 00:08:04.016 { 00:08:04.016 "nbd_device": "/dev/nbd10", 00:08:04.016 "bdev_name": "Nvme2n1" 00:08:04.016 }, 00:08:04.016 { 00:08:04.016 "nbd_device": "/dev/nbd11", 00:08:04.016 "bdev_name": "Nvme2n2" 00:08:04.016 }, 
00:08:04.016 { 00:08:04.016 "nbd_device": "/dev/nbd12", 00:08:04.016 "bdev_name": "Nvme2n3" 00:08:04.016 }, 00:08:04.016 { 00:08:04.016 "nbd_device": "/dev/nbd13", 00:08:04.016 "bdev_name": "Nvme3n1" 00:08:04.016 } 00:08:04.016 ]' 00:08:04.016 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:04.016 { 00:08:04.016 "nbd_device": "/dev/nbd0", 00:08:04.016 "bdev_name": "Nvme0n1" 00:08:04.016 }, 00:08:04.016 { 00:08:04.016 "nbd_device": "/dev/nbd1", 00:08:04.016 "bdev_name": "Nvme1n1" 00:08:04.016 }, 00:08:04.016 { 00:08:04.016 "nbd_device": "/dev/nbd10", 00:08:04.016 "bdev_name": "Nvme2n1" 00:08:04.016 }, 00:08:04.016 { 00:08:04.016 "nbd_device": "/dev/nbd11", 00:08:04.016 "bdev_name": "Nvme2n2" 00:08:04.016 }, 00:08:04.016 { 00:08:04.016 "nbd_device": "/dev/nbd12", 00:08:04.016 "bdev_name": "Nvme2n3" 00:08:04.016 }, 00:08:04.016 { 00:08:04.016 "nbd_device": "/dev/nbd13", 00:08:04.016 "bdev_name": "Nvme3n1" 00:08:04.016 } 00:08:04.016 ]' 00:08:04.016 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:04.016 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:04.016 /dev/nbd1 00:08:04.016 /dev/nbd10 00:08:04.016 /dev/nbd11 00:08:04.016 /dev/nbd12 00:08:04.016 /dev/nbd13' 00:08:04.016 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:04.016 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:04.016 /dev/nbd1 00:08:04.016 /dev/nbd10 00:08:04.016 /dev/nbd11 00:08:04.016 /dev/nbd12 00:08:04.016 /dev/nbd13' 00:08:04.016 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:04.016 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:04.016 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:04.016 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:04.016 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:04.016 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:04.016 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:04.016 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:04.016 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:04.016 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:04.016 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:04.016 256+0 records in 00:08:04.016 256+0 records out 00:08:04.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00399486 s, 262 MB/s 00:08:04.016 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.016 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:04.274 256+0 records in 00:08:04.274 256+0 records out 00:08:04.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127861 s, 8.2 MB/s 00:08:04.274 09:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.274 09:33:41 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:04.274 256+0 records in 00:08:04.274 256+0 records out 00:08:04.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12373 s, 8.5 MB/s 00:08:04.274 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.274 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:04.533 256+0 records in 00:08:04.533 256+0 records out 00:08:04.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128067 s, 8.2 MB/s 00:08:04.533 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.533 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:04.533 256+0 records in 00:08:04.533 256+0 records out 00:08:04.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129467 s, 8.1 MB/s 00:08:04.533 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.533 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:04.791 256+0 records in 00:08:04.791 256+0 records out 00:08:04.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123815 s, 8.5 MB/s 00:08:04.791 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.791 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:04.791 256+0 records in 00:08:04.791 256+0 records out 00:08:04.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131038 s, 8.0 MB/s 00:08:04.791 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:04.791 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:04.791 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:04.791 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:04.791 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:04.791 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:04.791 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:04.791 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.791 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:04.791 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.791 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:04.791 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.791 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:05.074 09:33:42 
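The nbd_dd_data_verify pass traced here writes one shared random buffer to every attached device and then compares it back. Reduced to plain commands (paths and sizes as in the log, device list as started for this test):

tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data, written once
for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
done
for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                        # byte-for-byte readback check
done
rm "$tmp"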
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.074 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:05.074 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.074 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:05.074 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.074 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:05.074 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:05.074 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:05.074 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.074 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:05.074 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:05.074 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:05.074 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.074 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:05.332 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:05.332 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:05.332 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:05.332 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.332 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.332 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:05.332 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.332 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.332 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.332 09:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:05.589 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:05.589 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:05.589 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:05.590 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.590 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.590 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:05.590 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.590 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.590 09:33:43 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.590 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:05.857 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:05.857 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:05.857 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:05.857 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.857 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.857 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:05.857 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.857 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.857 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.857 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:05.857 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:06.114 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:06.114 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:06.114 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.114 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.114 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:06.114 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.114 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.114 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.114 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:06.115 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:06.372 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:06.372 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:06.372 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.372 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.372 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:06.372 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.372 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.372 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.372 09:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:06.372 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:06.372 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:06.372 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:06.372 
09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.372 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.372 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:06.372 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.372 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.372 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:06.372 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.372 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.630 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:06.630 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:06.630 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.630 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:06.630 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.630 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:06.630 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:06.630 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:06.630 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:06.630 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:06.630 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:06.630 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:06.630 09:33:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:06.630 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.630 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:06.630 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:06.630 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:06.630 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:06.889 malloc_lvol_verify 00:08:06.889 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:07.147 72ea9919-a347-43f7-9646-00c23d98c486 00:08:07.147 09:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:07.405 968549c8-12d2-44d5-ad06-8a1ed6c25234 00:08:07.405 09:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:07.663 /dev/nbd0 00:08:07.663 09:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:07.663 mke2fs 1.46.5 
(30-Dec-2021) 00:08:07.663 Discarding device blocks: 0/4096 done 00:08:07.663 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:07.663 00:08:07.663 Allocating group tables: 0/1 done 00:08:07.663 Writing inode tables: 0/1 done 00:08:07.663 Creating journal (1024 blocks): done 00:08:07.663 Writing superblocks and filesystem accounting information: 0/1 done 00:08:07.663 00:08:07.663 09:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:07.663 09:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:07.663 09:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.663 09:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:07.663 09:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:07.663 09:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:07.663 09:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.663 09:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 76913 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 76913 ']' 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 76913 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76913 00:08:07.922 killing process with pid 76913 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76913' 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 76913 00:08:07.922 09:33:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 76913 00:08:08.180 09:33:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:08.180 00:08:08.180 real 0m10.209s 00:08:08.180 user 0m13.742s 00:08:08.180 sys 0m4.544s 00:08:08.180 09:33:45 
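The nbd_with_lvol_verify step that produced the mke2fs output above stacks a logical volume on a malloc bdev and checks that a filesystem can be created through /dev/nbd0. A sketch with the names and sizes from the trace:

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512-byte blocks
$RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # logical volume store on top of it
$RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside that store
$RPC nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0                                    # must succeed for the test to pass
$RPC nbd_stop_disk /dev/nbd0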
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.180 ************************************ 00:08:08.180 END TEST bdev_nbd 00:08:08.180 ************************************ 00:08:08.180 09:33:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:08.439 09:33:46 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:08.439 09:33:46 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:08:08.439 skipping fio tests on NVMe due to multi-ns failures. 00:08:08.439 09:33:46 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:08.439 09:33:46 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:08.439 09:33:46 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:08.439 09:33:46 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:08.439 09:33:46 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.439 09:33:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:08.439 ************************************ 00:08:08.439 START TEST bdev_verify 00:08:08.439 ************************************ 00:08:08.439 09:33:46 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:08.439 [2024-07-24 09:33:46.114964] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:08:08.439 [2024-07-24 09:33:46.115110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77288 ] 00:08:08.698 [2024-07-24 09:33:46.285812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:08.698 [2024-07-24 09:33:46.328289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.698 [2024-07-24 09:33:46.328390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.956 Running I/O for 5 seconds... 
00:08:14.225 00:08:14.225 Latency(us) 00:08:14.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.225 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:14.225 Verification LBA range: start 0x0 length 0xbd0bd 00:08:14.225 Nvme0n1 : 5.03 1856.31 7.25 0.00 0.00 68725.64 15581.25 76642.90 00:08:14.225 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:14.225 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:14.225 Nvme0n1 : 5.04 1853.09 7.24 0.00 0.00 68854.95 15054.86 81696.28 00:08:14.225 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:14.225 Verification LBA range: start 0x0 length 0xa0000 00:08:14.225 Nvme1n1 : 5.05 1861.63 7.27 0.00 0.00 68417.08 6185.12 70326.18 00:08:14.225 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:14.225 Verification LBA range: start 0xa0000 length 0xa0000 00:08:14.225 Nvme1n1 : 5.04 1852.61 7.24 0.00 0.00 68760.95 16002.36 75379.56 00:08:14.225 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:14.225 Verification LBA range: start 0x0 length 0x80000 00:08:14.225 Nvme2n1 : 5.05 1861.16 7.27 0.00 0.00 68350.11 6264.08 64430.57 00:08:14.225 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:14.225 Verification LBA range: start 0x80000 length 0x80000 00:08:14.225 Nvme2n1 : 5.07 1867.71 7.30 0.00 0.00 67980.56 6553.60 61061.65 00:08:14.225 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:14.225 Verification LBA range: start 0x0 length 0x80000 00:08:14.225 Nvme2n2 : 5.07 1869.80 7.30 0.00 0.00 67902.73 7895.90 63588.34 00:08:14.225 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:14.225 Verification LBA range: start 0x80000 length 0x80000 00:08:14.225 Nvme2n2 : 5.07 1867.30 7.29 0.00 0.00 67832.64 6553.60 61482.77 00:08:14.225 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:14.225 Verification LBA range: start 0x0 length 0x80000 00:08:14.225 Nvme2n3 : 5.07 1869.30 7.30 0.00 0.00 67797.04 8159.10 64851.69 00:08:14.225 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:14.225 Verification LBA range: start 0x80000 length 0x80000 00:08:14.225 Nvme2n3 : 5.07 1866.90 7.29 0.00 0.00 67740.65 6632.56 64009.46 00:08:14.225 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:14.225 Verification LBA range: start 0x0 length 0x20000 00:08:14.225 Nvme3n1 : 5.07 1868.87 7.30 0.00 0.00 67707.95 7580.07 66957.26 00:08:14.225 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:14.225 Verification LBA range: start 0x20000 length 0x20000 00:08:14.225 Nvme3n1 : 5.07 1866.50 7.29 0.00 0.00 67653.70 6606.24 63167.23 00:08:14.225 =================================================================================================================== 00:08:14.225 Total : 22361.18 87.35 0.00 0.00 68141.23 6185.12 81696.28 00:08:14.794 00:08:14.794 real 0m6.332s 00:08:14.794 user 0m11.802s 00:08:14.794 sys 0m0.268s 00:08:14.794 09:33:52 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.794 ************************************ 00:08:14.794 END TEST bdev_verify 00:08:14.794 ************************************ 00:08:14.794 09:33:52 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:14.794 09:33:52 blockdev_nvme -- bdev/blockdev.sh@777 -- # 
run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:14.794 09:33:52 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:14.794 09:33:52 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.794 09:33:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.794 ************************************ 00:08:14.794 START TEST bdev_verify_big_io 00:08:14.794 ************************************ 00:08:14.794 09:33:52 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:14.794 [2024-07-24 09:33:52.545682] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:08:14.794 [2024-07-24 09:33:52.545838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77375 ] 00:08:15.053 [2024-07-24 09:33:52.715809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:15.053 [2024-07-24 09:33:52.759320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.053 [2024-07-24 09:33:52.759416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.623 Running I/O for 5 seconds... 00:08:22.204 00:08:22.204 Latency(us) 00:08:22.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.204 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:22.204 Verification LBA range: start 0x0 length 0xbd0b 00:08:22.204 Nvme0n1 : 5.50 162.88 10.18 0.00 0.00 762233.05 22740.20 795064.85 00:08:22.204 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:22.204 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:22.204 Nvme0n1 : 5.52 162.43 10.15 0.00 0.00 768120.67 31794.17 774851.34 00:08:22.204 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:22.204 Verification LBA range: start 0x0 length 0xa000 00:08:22.204 Nvme1n1 : 5.56 166.25 10.39 0.00 0.00 730533.56 54323.82 727686.48 00:08:22.204 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:22.204 Verification LBA range: start 0xa000 length 0xa000 00:08:22.204 Nvme1n1 : 5.52 162.36 10.15 0.00 0.00 750073.95 83801.86 717579.72 00:08:22.204 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:22.204 Verification LBA range: start 0x0 length 0x8000 00:08:22.204 Nvme2n1 : 5.61 171.17 10.70 0.00 0.00 696527.93 47164.86 747899.99 00:08:22.204 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:22.204 Verification LBA range: start 0x8000 length 0x8000 00:08:22.204 Nvme2n1 : 5.60 163.99 10.25 0.00 0.00 723353.67 82538.51 734424.31 00:08:22.204 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:22.204 Verification LBA range: start 0x0 length 0x8000 00:08:22.204 Nvme2n2 : 5.61 171.10 10.69 0.00 0.00 679929.86 48007.09 764744.58 00:08:22.204 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:22.204 Verification LBA range: start 0x8000 length 0x8000 00:08:22.204 Nvme2n2 : 5.66 169.63 10.60 0.00 
0.00 687575.62 47164.86 751268.91 00:08:22.204 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:22.204 Verification LBA range: start 0x0 length 0x8000 00:08:22.204 Nvme2n3 : 5.67 180.70 11.29 0.00 0.00 631020.11 26635.51 788327.02 00:08:22.204 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:22.204 Verification LBA range: start 0x8000 length 0x8000 00:08:22.204 Nvme2n3 : 5.71 175.48 10.97 0.00 0.00 652016.09 41479.81 771482.42 00:08:22.204 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:22.204 Verification LBA range: start 0x0 length 0x2000 00:08:22.204 Nvme3n1 : 5.71 196.50 12.28 0.00 0.00 566897.44 3632.12 805171.61 00:08:22.204 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:22.204 Verification LBA range: start 0x2000 length 0x2000 00:08:22.204 Nvme3n1 : 5.72 183.79 11.49 0.00 0.00 608924.34 2566.17 791695.94 00:08:22.204 =================================================================================================================== 00:08:22.204 Total : 2066.28 129.14 0.00 0.00 683896.07 2566.17 805171.61 00:08:22.204 00:08:22.204 real 0m7.265s 00:08:22.204 user 0m13.600s 00:08:22.204 sys 0m0.299s 00:08:22.204 09:33:59 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.204 09:33:59 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:22.204 ************************************ 00:08:22.204 END TEST bdev_verify_big_io 00:08:22.204 ************************************ 00:08:22.204 09:33:59 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:22.204 09:33:59 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:22.204 09:33:59 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.205 09:33:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.205 ************************************ 00:08:22.205 START TEST bdev_write_zeroes 00:08:22.205 ************************************ 00:08:22.205 09:33:59 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:22.205 [2024-07-24 09:33:59.851373] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:08:22.205 [2024-07-24 09:33:59.851579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77473 ] 00:08:22.205 [2024-07-24 09:34:00.019186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.462 [2024-07-24 09:34:00.087473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.720 Running I/O for 1 seconds... 
00:08:24.091 00:08:24.091 Latency(us) 00:08:24.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.092 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:24.092 Nvme0n1 : 1.01 9153.77 35.76 0.00 0.00 13925.76 9527.72 26530.24 00:08:24.092 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:24.092 Nvme1n1 : 1.01 9143.41 35.72 0.00 0.00 13938.90 9738.28 26530.24 00:08:24.092 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:24.092 Nvme2n1 : 1.02 9181.51 35.87 0.00 0.00 13862.87 6895.76 28214.70 00:08:24.092 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:24.092 Nvme2n2 : 1.02 9171.25 35.83 0.00 0.00 13803.13 7264.23 28846.37 00:08:24.092 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:24.092 Nvme2n3 : 1.02 9161.21 35.79 0.00 0.00 13799.15 7422.15 29267.48 00:08:24.092 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:24.092 Nvme3n1 : 1.02 9088.25 35.50 0.00 0.00 13893.08 7685.35 29478.04 00:08:24.092 =================================================================================================================== 00:08:24.092 Total : 54899.39 214.45 0.00 0.00 13870.32 6895.76 29478.04 00:08:24.092 00:08:24.092 real 0m2.041s 00:08:24.092 user 0m1.653s 00:08:24.092 sys 0m0.269s 00:08:24.092 09:34:01 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.092 09:34:01 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:24.092 ************************************ 00:08:24.092 END TEST bdev_write_zeroes 00:08:24.092 ************************************ 00:08:24.092 09:34:01 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:24.092 09:34:01 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:24.092 09:34:01 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.092 09:34:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.092 ************************************ 00:08:24.092 START TEST bdev_json_nonenclosed 00:08:24.092 ************************************ 00:08:24.092 09:34:01 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:24.351 [2024-07-24 09:34:01.968710] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:08:24.351 [2024-07-24 09:34:01.968834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77515 ] 00:08:24.351 [2024-07-24 09:34:02.137483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.610 [2024-07-24 09:34:02.180679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.610 [2024-07-24 09:34:02.180801] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
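Note: the three bdevperf runs above differ only in their workload flags; as a reference, the equivalent stand-alone invocations (same paths and options as traced in this log, trailing '' left empty so the --json config supplies the bdevs) would be:

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

# bdev_verify: 4 KiB I/O with verification, 5 s, two cores
"$BDEVPERF" --json "$CONF" -q 128 -o 4096  -w verify       -t 5 -C -m 0x3 ''

# bdev_verify_big_io: same verify workload with 64 KiB I/O
"$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify       -t 5 -C -m 0x3 ''

# bdev_write_zeroes: 1 s of write_zeroes on a single core
"$BDEVPERF" --json "$CONF" -q 128 -o 4096  -w write_zeroes -t 1 ''
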
00:08:24.610 [2024-07-24 09:34:02.180839] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:24.610 [2024-07-24 09:34:02.180864] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.610 00:08:24.610 real 0m0.401s 00:08:24.610 user 0m0.172s 00:08:24.610 sys 0m0.126s 00:08:24.610 ************************************ 00:08:24.610 END TEST bdev_json_nonenclosed 00:08:24.610 ************************************ 00:08:24.610 09:34:02 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.610 09:34:02 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:24.610 09:34:02 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:24.610 09:34:02 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:24.610 09:34:02 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.610 09:34:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.610 ************************************ 00:08:24.610 START TEST bdev_json_nonarray 00:08:24.610 ************************************ 00:08:24.610 09:34:02 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:24.869 [2024-07-24 09:34:02.434063] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:08:24.869 [2024-07-24 09:34:02.434218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77540 ] 00:08:24.869 [2024-07-24 09:34:02.603051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.869 [2024-07-24 09:34:02.645351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.869 [2024-07-24 09:34:02.645473] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
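Note: both JSON negative tests feed bdevperf a config that is syntactically valid JSON but structurally wrong; the fixtures below are illustrative stand-ins (not the repository's nonenclosed.json / nonarray.json verbatim) showing shapes that would trip the same two json_config errors seen above:

# expected shape: a top-level object whose "subsystems" member is an array
printf '%s\n' '{ "subsystems": [] }'     > good.json

# "not enclosed in {}": the top-level value is an array, not an object
printf '%s\n' '[ { "subsystems": [] } ]' > nonenclosed_like.json

# "'subsystems' should be an array": member present but of the wrong type
printf '%s\n' '{ "subsystems": {} }'     > nonarray_like.json
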
00:08:24.869 [2024-07-24 09:34:02.645506] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:24.869 [2024-07-24 09:34:02.645520] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.128 00:08:25.128 real 0m0.416s 00:08:25.128 user 0m0.177s 00:08:25.128 sys 0m0.135s 00:08:25.128 09:34:02 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.128 ************************************ 00:08:25.128 END TEST bdev_json_nonarray 00:08:25.128 ************************************ 00:08:25.128 09:34:02 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:25.128 09:34:02 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:08:25.128 09:34:02 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:08:25.128 09:34:02 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:08:25.128 09:34:02 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:25.128 09:34:02 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:08:25.128 09:34:02 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:25.128 09:34:02 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:25.128 09:34:02 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:25.128 09:34:02 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:25.128 09:34:02 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:25.128 09:34:02 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:25.128 00:08:25.128 real 0m31.847s 00:08:25.128 user 0m47.885s 00:08:25.128 sys 0m7.322s 00:08:25.128 09:34:02 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.128 ************************************ 00:08:25.128 END TEST blockdev_nvme 00:08:25.128 09:34:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.128 ************************************ 00:08:25.128 09:34:02 -- spdk/autotest.sh@217 -- # uname -s 00:08:25.128 09:34:02 -- spdk/autotest.sh@217 -- # [[ Linux == Linux ]] 00:08:25.128 09:34:02 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:25.128 09:34:02 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:25.128 09:34:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.128 09:34:02 -- common/autotest_common.sh@10 -- # set +x 00:08:25.128 ************************************ 00:08:25.128 START TEST blockdev_nvme_gpt 00:08:25.129 ************************************ 00:08:25.129 09:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:25.388 * Looking for test storage... 
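Note: the nbd_with_lvol_verify step earlier in this run reduces to a short RPC sequence; a minimal sketch against the same /var/tmp/spdk-nbd.sock target (sizes and names as traced above, error handling omitted) looks like:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512-byte blocks
$RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvol store on top of it
$RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol "lvol" in store "lvs"
$RPC nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0

mkfs.ext4 /dev/nbd0                                    # format it through the kernel NBD device
$RPC nbd_stop_disk /dev/nbd0                           # detach once the filesystem is written
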
00:08:25.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=77611 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:25.388 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 77611 00:08:25.388 09:34:03 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 77611 ']' 00:08:25.388 09:34:03 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.388 09:34:03 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.388 09:34:03 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
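Note: waitforlisten is essentially a poll loop against the target's RPC socket; a rough stand-in (assuming the default /var/tmp/spdk.sock and the stock rpc.py, not the harness's exact implementation) is:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
tgt_pid=$!

# keep probing the RPC socket until the target answers (give up after ~10 s)
for _ in $(seq 1 100); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 spdk_get_version >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done
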
00:08:25.388 09:34:03 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.388 09:34:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:25.388 [2024-07-24 09:34:03.170130] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:08:25.388 [2024-07-24 09:34:03.170277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77611 ] 00:08:25.647 [2024-07-24 09:34:03.334720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.647 [2024-07-24 09:34:03.376505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.216 09:34:03 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.216 09:34:03 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:08:26.216 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:26.216 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:08:26.216 09:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:26.783 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:27.042 Waiting for block devices as requested 00:08:27.042 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:27.301 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:27.301 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:27.560 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:32.829 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:32.829 09:34:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:08:32.829 BYT; 00:08:32.829 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:08:32.829 BYT; 00:08:32.829 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ 
\u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:08:32.829 09:34:10 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:08:32.829 09:34:10 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:32.829 09:34:10 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:32.829 09:34:10 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:08:32.829 09:34:10 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:08:32.829 09:34:10 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:32.829 09:34:10 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:32.829 09:34:10 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:32.829 09:34:10 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:08:32.829 09:34:10 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:08:32.829 09:34:10 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:32.829 09:34:10 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:32.829 09:34:10 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:08:32.829 09:34:10 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:08:32.829 09:34:10 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:32.829 09:34:10 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:32.829 09:34:10 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:32.829 09:34:10 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:32.829 09:34:10 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:08:33.786 The operation has completed successfully. 
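Note: the GPT setup here amounts to labelling /dev/nvme0n1 and stamping its two halves with SPDK's partition-type GUIDs (the GUID values are exactly what the script grepped out of module/bdev/gpt/gpt.h above); condensed, with the second sgdisk call continuing just below in the trace:

DEV=/dev/nvme0n1
SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b        # SPDK_GPT_PART_TYPE_GUID
SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c    # SPDK_GPT_PART_TYPE_GUID_OLD

# one GPT label, two equal partitions
parted -s "$DEV" mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%

# tag each partition with its type GUID and a fixed unique GUID so the
# gpt vbdev module later exposes them as Nvme1n1p1 / Nvme1n1p2
sgdisk -t 1:"$SPDK_GPT_GUID"     -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$DEV"
sgdisk -t 2:"$SPDK_GPT_OLD_GUID" -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$DEV"
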
00:08:33.786 09:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:08:34.722 The operation has completed successfully. 00:08:34.722 09:34:12 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:35.289 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:36.223 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:36.223 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:36.223 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:36.223 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:36.223 09:34:13 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:08:36.223 09:34:13 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.223 09:34:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:36.223 [] 00:08:36.223 09:34:13 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.223 09:34:13 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:08:36.223 09:34:13 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:36.223 09:34:13 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:36.223 09:34:13 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:36.482 09:34:14 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:36.482 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.482 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:36.741 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.741 09:34:14 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:36.741 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.741 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:36.741 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.741 09:34:14 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:08:36.741 09:34:14 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:36.741 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.741 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:36.741 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.741 09:34:14 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:36.741 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.741 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:36.741 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.741 
09:34:14 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:36.741 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.741 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:36.741 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.741 09:34:14 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:36.741 09:34:14 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:36.741 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.741 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:36.741 09:34:14 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:37.001 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.001 09:34:14 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:37.001 09:34:14 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:37.002 09:34:14 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "e3d109ed-c8d3-435b-a3e8-81cd26e9cb9b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e3d109ed-c8d3-435b-a3e8-81cd26e9cb9b",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "116fd9f1-ab93-46dc-b445-2fdb83eb3a5b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "116fd9f1-ab93-46dc-b445-2fdb83eb3a5b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "0231c717-425b-48ac-9e25-dacad180bd0d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0231c717-425b-48ac-9e25-dacad180bd0d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' 
"nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "55cf5a4e-11b9-40c5-bb7a-d843495425c8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "55cf5a4e-11b9-40c5-bb7a-d843495425c8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "00e096ed-fe93-4d90-8a02-eead22b7d4a0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "00e096ed-fe93-4d90-8a02-eead22b7d4a0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": 
"0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:37.002 09:34:14 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:37.002 09:34:14 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:37.002 09:34:14 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:37.002 09:34:14 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 77611 00:08:37.002 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 77611 ']' 00:08:37.002 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 77611 00:08:37.002 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:08:37.002 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:37.002 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77611 00:08:37.002 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:37.002 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:37.002 killing process with pid 77611 00:08:37.002 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77611' 00:08:37.002 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 77611 00:08:37.002 09:34:14 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 77611 00:08:37.261 09:34:15 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:37.261 09:34:15 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:37.261 09:34:15 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:37.261 09:34:15 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.261 09:34:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.261 ************************************ 00:08:37.261 START TEST bdev_hello_world 00:08:37.261 ************************************ 00:08:37.261 09:34:15 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:37.520 [2024-07-24 09:34:15.125121] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:08:37.520 [2024-07-24 09:34:15.125271] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78237 ] 00:08:37.520 [2024-07-24 09:34:15.297286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.778 [2024-07-24 09:34:15.343978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.037 [2024-07-24 09:34:15.735891] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:38.037 [2024-07-24 09:34:15.735948] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:38.037 [2024-07-24 09:34:15.735984] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:38.037 [2024-07-24 09:34:15.738471] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:38.037 [2024-07-24 09:34:15.739032] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:38.037 [2024-07-24 09:34:15.739070] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:38.037 [2024-07-24 09:34:15.739347] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:38.037 00:08:38.037 [2024-07-24 09:34:15.739378] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:38.296 00:08:38.296 real 0m0.934s 00:08:38.296 user 0m0.611s 00:08:38.296 sys 0m0.217s 00:08:38.296 09:34:15 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.296 09:34:15 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:38.296 ************************************ 00:08:38.296 END TEST bdev_hello_world 00:08:38.296 ************************************ 00:08:38.296 09:34:16 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:38.296 09:34:16 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:38.296 09:34:16 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.296 09:34:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:38.296 ************************************ 00:08:38.296 START TEST bdev_bounds 00:08:38.296 ************************************ 00:08:38.296 09:34:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:08:38.296 09:34:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=78264 00:08:38.296 09:34:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:38.296 09:34:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:38.296 Process bdevio pid: 78264 00:08:38.296 09:34:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 78264' 00:08:38.296 09:34:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 78264 00:08:38.296 09:34:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 78264 ']' 00:08:38.296 09:34:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.296 09:34:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.296 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.296 09:34:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.296 09:34:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.296 09:34:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:38.555 [2024-07-24 09:34:16.141225] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:08:38.555 [2024-07-24 09:34:16.141395] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78264 ] 00:08:38.555 [2024-07-24 09:34:16.313991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:38.555 [2024-07-24 09:34:16.363353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.555 [2024-07-24 09:34:16.363385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.555 [2024-07-24 09:34:16.363489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.492 09:34:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.492 09:34:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:08:39.492 09:34:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:39.492 I/O targets: 00:08:39.492 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:39.492 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:08:39.492 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:08:39.492 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:39.492 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:39.492 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:39.492 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:39.492 00:08:39.492 00:08:39.492 CUnit - A unit testing framework for C - Version 2.1-3 00:08:39.492 http://cunit.sourceforge.net/ 00:08:39.492 00:08:39.492 00:08:39.492 Suite: bdevio tests on: Nvme3n1 00:08:39.492 Test: blockdev write read block ...passed 00:08:39.492 Test: blockdev write zeroes read block ...passed 00:08:39.492 Test: blockdev write zeroes read no split ...passed 00:08:39.492 Test: blockdev write zeroes read split ...passed 00:08:39.492 Test: blockdev write zeroes read split partial ...passed 00:08:39.492 Test: blockdev reset ...[2024-07-24 09:34:17.100833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:39.492 [2024-07-24 09:34:17.102812] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:39.492 passed 00:08:39.492 Test: blockdev write read 8 blocks ...passed 00:08:39.492 Test: blockdev write read size > 128k ...passed 00:08:39.492 Test: blockdev write read invalid size ...passed 00:08:39.492 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:39.492 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:39.492 Test: blockdev write read max offset ...passed 00:08:39.492 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:39.492 Test: blockdev writev readv 8 blocks ...passed 00:08:39.492 Test: blockdev writev readv 30 x 1block ...passed 00:08:39.493 Test: blockdev writev readv block ...passed 00:08:39.493 Test: blockdev writev readv size > 128k ...passed 00:08:39.493 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:39.493 Test: blockdev comparev and writev ...[2024-07-24 09:34:17.108468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9a0e000 len:0x1000 00:08:39.493 [2024-07-24 09:34:17.108519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:39.493 passed 00:08:39.493 Test: blockdev nvme passthru rw ...passed 00:08:39.493 Test: blockdev nvme passthru vendor specific ...[2024-07-24 09:34:17.109564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:39.493 [2024-07-24 09:34:17.109603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:39.493 passed 00:08:39.493 Test: blockdev nvme admin passthru ...passed 00:08:39.493 Test: blockdev copy ...passed 00:08:39.493 Suite: bdevio tests on: Nvme2n3 00:08:39.493 Test: blockdev write read block ...passed 00:08:39.493 Test: blockdev write zeroes read block ...passed 00:08:39.493 Test: blockdev write zeroes read no split ...passed 00:08:39.493 Test: blockdev write zeroes read split ...passed 00:08:39.493 Test: blockdev write zeroes read split partial ...passed 00:08:39.493 Test: blockdev reset ...[2024-07-24 09:34:17.130688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:39.493 [2024-07-24 09:34:17.132857] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:39.493 passed 00:08:39.493 Test: blockdev write read 8 blocks ...passed 00:08:39.493 Test: blockdev write read size > 128k ...passed 00:08:39.493 Test: blockdev write read invalid size ...passed 00:08:39.493 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:39.493 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:39.493 Test: blockdev write read max offset ...passed 00:08:39.493 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:39.493 Test: blockdev writev readv 8 blocks ...passed 00:08:39.493 Test: blockdev writev readv 30 x 1block ...passed 00:08:39.493 Test: blockdev writev readv block ...passed 00:08:39.493 Test: blockdev writev readv size > 128k ...passed 00:08:39.493 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:39.493 Test: blockdev comparev and writev ...[2024-07-24 09:34:17.140240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9a0a000 len:0x1000 00:08:39.493 [2024-07-24 09:34:17.140296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:39.493 passed 00:08:39.493 Test: blockdev nvme passthru rw ...passed 00:08:39.493 Test: blockdev nvme passthru vendor specific ...[2024-07-24 09:34:17.140977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:39.493 passed 00:08:39.493 Test: blockdev nvme admin passthru ...[2024-07-24 09:34:17.141023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:39.493 passed 00:08:39.493 Test: blockdev copy ...passed 00:08:39.493 Suite: bdevio tests on: Nvme2n2 00:08:39.493 Test: blockdev write read block ...passed 00:08:39.493 Test: blockdev write zeroes read block ...passed 00:08:39.493 Test: blockdev write zeroes read no split ...passed 00:08:39.493 Test: blockdev write zeroes read split ...passed 00:08:39.493 Test: blockdev write zeroes read split partial ...passed 00:08:39.493 Test: blockdev reset ...[2024-07-24 09:34:17.177202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:39.493 [2024-07-24 09:34:17.179384] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:39.493 passed 00:08:39.493 Test: blockdev write read 8 blocks ...passed 00:08:39.493 Test: blockdev write read size > 128k ...passed 00:08:39.493 Test: blockdev write read invalid size ...passed 00:08:39.493 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:39.493 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:39.493 Test: blockdev write read max offset ...passed 00:08:39.493 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:39.493 Test: blockdev writev readv 8 blocks ...passed 00:08:39.493 Test: blockdev writev readv 30 x 1block ...passed 00:08:39.493 Test: blockdev writev readv block ...passed 00:08:39.493 Test: blockdev writev readv size > 128k ...passed 00:08:39.493 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:39.493 Test: blockdev comparev and writev ...[2024-07-24 09:34:17.191091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8a06000 len:0x1000 00:08:39.493 [2024-07-24 09:34:17.191148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:passed 00:08:39.493 Test: blockdev nvme passthru rw ...0 sqhd:0018 p:1 m:0 dnr:1 00:08:39.493 passed 00:08:39.493 Test: blockdev nvme passthru vendor specific ...[2024-07-24 09:34:17.191881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:39.493 passed 00:08:39.493 Test: blockdev nvme admin passthru ...[2024-07-24 09:34:17.191930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:39.493 passed 00:08:39.493 Test: blockdev copy ...passed 00:08:39.493 Suite: bdevio tests on: Nvme2n1 00:08:39.493 Test: blockdev write read block ...passed 00:08:39.493 Test: blockdev write zeroes read block ...passed 00:08:39.493 Test: blockdev write zeroes read no split ...passed 00:08:39.493 Test: blockdev write zeroes read split ...passed 00:08:39.493 Test: blockdev write zeroes read split partial ...passed 00:08:39.493 Test: blockdev reset ...[2024-07-24 09:34:17.209637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:39.493 [2024-07-24 09:34:17.211812] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:39.493 passed 00:08:39.493 Test: blockdev write read 8 blocks ...passed 00:08:39.493 Test: blockdev write read size > 128k ...passed 00:08:39.493 Test: blockdev write read invalid size ...passed 00:08:39.493 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:39.493 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:39.493 Test: blockdev write read max offset ...passed 00:08:39.493 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:39.493 Test: blockdev writev readv 8 blocks ...passed 00:08:39.493 Test: blockdev writev readv 30 x 1block ...passed 00:08:39.493 Test: blockdev writev readv block ...passed 00:08:39.493 Test: blockdev writev readv size > 128k ...passed 00:08:39.493 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:39.493 Test: blockdev comparev and writev ...[2024-07-24 09:34:17.216839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8a0d000 len:0x1000 00:08:39.493 passed 00:08:39.493 Test: blockdev nvme passthru rw ...[2024-07-24 09:34:17.216894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:39.493 passed 00:08:39.493 Test: blockdev nvme passthru vendor specific ...[2024-07-24 09:34:17.217486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:39.493 passed 00:08:39.493 Test: blockdev nvme admin passthru ...[2024-07-24 09:34:17.217533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:39.493 passed 00:08:39.493 Test: blockdev copy ...passed 00:08:39.493 Suite: bdevio tests on: Nvme1n1p2 00:08:39.493 Test: blockdev write read block ...passed 00:08:39.493 Test: blockdev write zeroes read block ...passed 00:08:39.493 Test: blockdev write zeroes read no split ...passed 00:08:39.493 Test: blockdev write zeroes read split ...passed 00:08:39.493 Test: blockdev write zeroes read split partial ...passed 00:08:39.493 Test: blockdev reset ...[2024-07-24 09:34:17.234358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:39.493 [2024-07-24 09:34:17.236268] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:39.493 passed 00:08:39.493 Test: blockdev write read 8 blocks ...passed 00:08:39.493 Test: blockdev write read size > 128k ...passed 00:08:39.493 Test: blockdev write read invalid size ...passed 00:08:39.493 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:39.493 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:39.493 Test: blockdev write read max offset ...passed 00:08:39.493 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:39.493 Test: blockdev writev readv 8 blocks ...passed 00:08:39.493 Test: blockdev writev readv 30 x 1block ...passed 00:08:39.493 Test: blockdev writev readv block ...passed 00:08:39.493 Test: blockdev writev readv size > 128k ...passed 00:08:39.493 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:39.493 Test: blockdev comparev and writev ...[2024-07-24 09:34:17.242492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2bdc02000 len:0x1000 00:08:39.493 [2024-07-24 09:34:17.242549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:39.493 passed 00:08:39.493 Test: blockdev nvme passthru rw ...passed 00:08:39.493 Test: blockdev nvme passthru vendor specific ...passed 00:08:39.493 Test: blockdev nvme admin passthru ...passed 00:08:39.493 Test: blockdev copy ...passed 00:08:39.493 Suite: bdevio tests on: Nvme1n1p1 00:08:39.493 Test: blockdev write read block ...passed 00:08:39.493 Test: blockdev write zeroes read block ...passed 00:08:39.493 Test: blockdev write zeroes read no split ...passed 00:08:39.493 Test: blockdev write zeroes read split ...passed 00:08:39.493 Test: blockdev write zeroes read split partial ...passed 00:08:39.493 Test: blockdev reset ...[2024-07-24 09:34:17.256619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:39.493 [2024-07-24 09:34:17.258424] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:39.493 passed 00:08:39.493 Test: blockdev write read 8 blocks ...passed 00:08:39.493 Test: blockdev write read size > 128k ...passed 00:08:39.493 Test: blockdev write read invalid size ...passed 00:08:39.494 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:39.494 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:39.494 Test: blockdev write read max offset ...passed 00:08:39.494 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:39.494 Test: blockdev writev readv 8 blocks ...passed 00:08:39.494 Test: blockdev writev readv 30 x 1block ...passed 00:08:39.494 Test: blockdev writev readv block ...passed 00:08:39.494 Test: blockdev writev readv size > 128k ...passed 00:08:39.494 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:39.494 Test: blockdev comparev and writev ...[2024-07-24 09:34:17.263465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2d5c3b000 len:0x1000 00:08:39.494 passed 00:08:39.494 Test: blockdev nvme passthru rw ...passed 00:08:39.494 Test: blockdev nvme passthru vendor specific ...passed 00:08:39.494 Test: blockdev nvme admin passthru ...passed 00:08:39.494 Test: blockdev copy ...[2024-07-24 09:34:17.263520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:39.494 passed 00:08:39.494 Suite: bdevio tests on: Nvme0n1 00:08:39.494 Test: blockdev write read block ...passed 00:08:39.494 Test: blockdev write zeroes read block ...passed 00:08:39.494 Test: blockdev write zeroes read no split ...passed 00:08:39.494 Test: blockdev write zeroes read split ...passed 00:08:39.494 Test: blockdev write zeroes read split partial ...passed 00:08:39.494 Test: blockdev reset ...[2024-07-24 09:34:17.278426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:39.494 [2024-07-24 09:34:17.280260] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:39.494 passed 00:08:39.494 Test: blockdev write read 8 blocks ...passed 00:08:39.494 Test: blockdev write read size > 128k ...passed 00:08:39.494 Test: blockdev write read invalid size ...passed 00:08:39.494 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:39.494 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:39.494 Test: blockdev write read max offset ...passed 00:08:39.494 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:39.494 Test: blockdev writev readv 8 blocks ...passed 00:08:39.494 Test: blockdev writev readv 30 x 1block ...passed 00:08:39.494 Test: blockdev writev readv block ...passed 00:08:39.494 Test: blockdev writev readv size > 128k ...passed 00:08:39.494 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:39.494 Test: blockdev comparev and writev ...[2024-07-24 09:34:17.284999] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:39.494 separate metadata which is not supported yet. 
00:08:39.494 passed 00:08:39.494 Test: blockdev nvme passthru rw ...passed 00:08:39.494 Test: blockdev nvme passthru vendor specific ...passed 00:08:39.494 Test: blockdev nvme admin passthru ...[2024-07-24 09:34:17.285528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:39.494 [2024-07-24 09:34:17.285583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:39.494 passed 00:08:39.494 Test: blockdev copy ...passed 00:08:39.494 00:08:39.494 Run Summary: Type Total Ran Passed Failed Inactive 00:08:39.494 suites 7 7 n/a 0 0 00:08:39.494 tests 161 161 161 0 0 00:08:39.494 asserts 1025 1025 1025 0 n/a 00:08:39.494 00:08:39.494 Elapsed time = 0.469 seconds 00:08:39.494 0 00:08:39.494 09:34:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 78264 00:08:39.494 09:34:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 78264 ']' 00:08:39.494 09:34:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 78264 00:08:39.752 09:34:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:08:39.752 09:34:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:39.752 09:34:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78264 00:08:39.752 09:34:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:39.752 09:34:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:39.752 killing process with pid 78264 00:08:39.752 09:34:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78264' 00:08:39.752 09:34:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 78264 00:08:39.752 09:34:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 78264 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:40.011 00:08:40.011 real 0m1.527s 00:08:40.011 user 0m3.636s 00:08:40.011 sys 0m0.393s 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:40.011 ************************************ 00:08:40.011 END TEST bdev_bounds 00:08:40.011 ************************************ 00:08:40.011 09:34:17 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:40.011 09:34:17 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:40.011 09:34:17 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.011 09:34:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:40.011 ************************************ 00:08:40.011 START TEST bdev_nbd 00:08:40.011 ************************************ 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:40.011 09:34:17 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=78312 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:40.011 09:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 78312 /var/tmp/spdk-nbd.sock 00:08:40.012 09:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 78312 ']' 00:08:40.012 09:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:40.012 09:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:40.012 09:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:40.012 09:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.012 09:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:40.012 [2024-07-24 09:34:17.745504] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
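The nbd test that follows drives every bdev through the Linux NBD layer purely over the /var/tmp/spdk-nbd.sock RPC socket started above. A minimal sketch of that flow, assuming a bdev_svc (or spdk_tgt) instance is already listening on that socket with Nvme0n1 configured, and using a relative scripts/rpc.py path and an arbitrary /tmp/nbdtest scratch file in place of the absolute paths in the trace:

# Attach bdev Nvme0n1 to /dev/nbd0 (same RPC the trace issues per bdev).
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
# Once the kernel lists the device, prove it is readable with one direct 4 KiB read,
# mirroring the waitfornbd/dd check the test performs.
grep -q -w nbd0 /proc/partitions && dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
# Detach again when done.
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
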
00:08:40.012 [2024-07-24 09:34:17.745675] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.270 [2024-07-24 09:34:17.904307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.270 [2024-07-24 09:34:17.953857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.837 09:34:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.837 09:34:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:08:40.837 09:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:40.837 09:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.837 09:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:40.837 09:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:40.837 09:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:40.837 09:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.837 09:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:40.837 09:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:40.837 09:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:40.837 09:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:40.837 09:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:40.837 09:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:40.837 09:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:41.097 09:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:41.097 09:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:41.097 09:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:41.097 09:34:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:41.097 09:34:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:41.097 09:34:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:41.097 09:34:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:41.097 09:34:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:41.097 09:34:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:41.097 09:34:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:41.097 09:34:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:41.097 09:34:18 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.097 1+0 records in 00:08:41.097 1+0 records out 00:08:41.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381911 s, 10.7 MB/s 00:08:41.097 09:34:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.097 09:34:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:41.097 09:34:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.097 09:34:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:41.097 09:34:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:41.097 09:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:41.097 09:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:41.097 09:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:41.665 09:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:41.665 09:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:41.665 09:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:41.665 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:41.666 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:41.666 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:41.666 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:41.666 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:41.666 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:41.666 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:41.666 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:41.666 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.666 1+0 records in 00:08:41.666 1+0 records out 00:08:41.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729719 s, 5.6 MB/s 00:08:41.666 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.666 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:41.666 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.666 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:41.666 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:41.666 09:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:41.666 09:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:41.666 09:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:41.925 09:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:41.925 09:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:41.925 09:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:41.925 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:08:41.925 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:41.925 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:41.925 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:41.925 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:08:41.925 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:41.925 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:41.925 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:41.925 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.925 1+0 records in 00:08:41.925 1+0 records out 00:08:41.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125459 s, 3.3 MB/s 00:08:41.925 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.925 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:41.925 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.925 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:41.925 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:41.925 09:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:41.925 09:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:41.925 09:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:42.184 09:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:42.184 09:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:42.184 09:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:42.184 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:08:42.184 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:42.184 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:42.184 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:42.184 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:08:42.184 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:42.184 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:42.184 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:42.184 09:34:19 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.184 1+0 records in 00:08:42.184 1+0 records out 00:08:42.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000718998 s, 5.7 MB/s 00:08:42.184 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.184 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:42.184 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.184 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:42.184 09:34:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:42.184 09:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:42.184 09:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:42.185 09:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:42.443 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:42.443 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:42.443 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:42.443 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:08:42.443 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:42.443 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:42.443 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:42.443 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:08:42.443 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:42.443 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:42.443 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:42.443 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.443 1+0 records in 00:08:42.443 1+0 records out 00:08:42.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005854 s, 7.0 MB/s 00:08:42.443 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.443 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:42.443 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.443 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:42.443 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:42.443 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:42.443 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:42.443 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
00:08:42.703 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:42.703 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:42.703 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:42.703 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:08:42.703 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:42.703 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:42.703 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:42.703 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:08:42.703 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:42.703 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:42.703 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:42.703 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.703 1+0 records in 00:08:42.703 1+0 records out 00:08:42.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670433 s, 6.1 MB/s 00:08:42.703 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.703 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:42.703 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.703 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:42.703 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:42.703 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:42.703 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:42.703 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:42.962 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:42.962 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:42.962 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:42.962 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:08:42.962 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:42.962 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:42.962 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:42.962 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:08:42.962 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:42.962 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:42.962 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:42.962 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.962 1+0 records in 00:08:42.962 1+0 records out 00:08:42.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000788921 s, 5.2 MB/s 00:08:42.962 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.962 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:42.962 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.962 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:42.962 09:34:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:42.962 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:42.962 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:42.962 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:43.221 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:43.221 { 00:08:43.221 "nbd_device": "/dev/nbd0", 00:08:43.221 "bdev_name": "Nvme0n1" 00:08:43.221 }, 00:08:43.221 { 00:08:43.221 "nbd_device": "/dev/nbd1", 00:08:43.221 "bdev_name": "Nvme1n1p1" 00:08:43.221 }, 00:08:43.221 { 00:08:43.221 "nbd_device": "/dev/nbd2", 00:08:43.221 "bdev_name": "Nvme1n1p2" 00:08:43.221 }, 00:08:43.221 { 00:08:43.221 "nbd_device": "/dev/nbd3", 00:08:43.221 "bdev_name": "Nvme2n1" 00:08:43.221 }, 00:08:43.221 { 00:08:43.221 "nbd_device": "/dev/nbd4", 00:08:43.221 "bdev_name": "Nvme2n2" 00:08:43.221 }, 00:08:43.221 { 00:08:43.221 "nbd_device": "/dev/nbd5", 00:08:43.221 "bdev_name": "Nvme2n3" 00:08:43.221 }, 00:08:43.221 { 00:08:43.221 "nbd_device": "/dev/nbd6", 00:08:43.221 "bdev_name": "Nvme3n1" 00:08:43.221 } 00:08:43.221 ]' 00:08:43.221 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:43.221 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:43.221 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:43.221 { 00:08:43.221 "nbd_device": "/dev/nbd0", 00:08:43.221 "bdev_name": "Nvme0n1" 00:08:43.221 }, 00:08:43.221 { 00:08:43.221 "nbd_device": "/dev/nbd1", 00:08:43.221 "bdev_name": "Nvme1n1p1" 00:08:43.221 }, 00:08:43.221 { 00:08:43.221 "nbd_device": "/dev/nbd2", 00:08:43.221 "bdev_name": "Nvme1n1p2" 00:08:43.221 }, 00:08:43.221 { 00:08:43.221 "nbd_device": "/dev/nbd3", 00:08:43.221 "bdev_name": "Nvme2n1" 00:08:43.221 }, 00:08:43.221 { 00:08:43.221 "nbd_device": "/dev/nbd4", 00:08:43.221 "bdev_name": "Nvme2n2" 00:08:43.221 }, 00:08:43.221 { 00:08:43.221 "nbd_device": "/dev/nbd5", 00:08:43.221 "bdev_name": "Nvme2n3" 00:08:43.221 }, 00:08:43.221 { 00:08:43.221 "nbd_device": "/dev/nbd6", 00:08:43.221 "bdev_name": "Nvme3n1" 00:08:43.221 } 00:08:43.221 ]' 00:08:43.221 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:43.221 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.221 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:43.221 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:43.221 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:43.221 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.221 09:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:43.480 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:43.480 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:43.480 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:43.480 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:43.480 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:43.480 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:43.480 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:43.480 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:43.480 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.480 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:43.737 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:43.737 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:43.737 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:43.737 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:43.737 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:43.737 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:43.737 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:43.737 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:43.737 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.738 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:43.996 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:43.996 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:43.996 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:43.996 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:43.996 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:43.996 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:43.996 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:43.996 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:43.996 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.996 09:34:21 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:44.255 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:44.255 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:44.255 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:44.255 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.255 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.255 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:44.255 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:44.255 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.255 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.255 09:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:44.255 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:44.255 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:44.255 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:44.255 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.255 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.255 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:44.255 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:44.255 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.255 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.255 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:44.513 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:44.513 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:44.513 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:44.513 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.513 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.513 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:44.513 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:44.513 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.513 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.513 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:44.771 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:44.772 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:44.772 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
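For cleanup, the test asks the target which NBD devices are still attached and stops each one. A rough equivalent of that nbd_get_disks / nbd_stop_disk sequence, again assuming the same /var/tmp/spdk-nbd.sock socket and a relative scripts/rpc.py path:

# List attached NBD devices as JSON and pull out the /dev/nbdX names with jq,
# then stop each one (same RPCs and jq filter as in the trace above).
for nbd in $(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'); do
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$nbd"
done
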
00:08:44.772 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.772 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.772 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:44.772 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:44.772 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.772 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:44.772 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.772 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:45.029 09:34:22 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:45.029 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:45.288 /dev/nbd0 00:08:45.288 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:45.288 09:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:45.288 09:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:45.288 09:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:45.288 09:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:45.288 09:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:45.288 09:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:45.288 09:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:45.288 09:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:45.288 09:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:45.288 09:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.288 1+0 records in 00:08:45.288 1+0 records out 00:08:45.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054481 s, 7.5 MB/s 00:08:45.288 09:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.288 09:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:45.288 09:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.288 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:45.288 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:45.288 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:45.288 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:45.288 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:45.548 /dev/nbd1 00:08:45.548 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:45.548 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:45.548 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:45.548 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:45.548 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:45.548 09:34:23 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:45.548 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:45.548 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:45.548 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:45.548 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:45.548 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.548 1+0 records in 00:08:45.548 1+0 records out 00:08:45.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433902 s, 9.4 MB/s 00:08:45.548 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.548 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:45.548 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.548 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:45.548 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:45.548 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:45.548 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:45.548 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:45.807 /dev/nbd10 00:08:45.807 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:45.807 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:45.807 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:08:45.807 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:45.807 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:45.807 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:45.807 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:08:45.807 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:45.807 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:45.807 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:45.807 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.807 1+0 records in 00:08:45.807 1+0 records out 00:08:45.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493653 s, 8.3 MB/s 00:08:45.807 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.807 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:45.807 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.807 09:34:23 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:45.807 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:45.807 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:45.807 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:45.807 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:46.066 /dev/nbd11 00:08:46.066 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:46.066 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:46.066 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:08:46.066 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:46.066 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:46.066 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:46.066 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:08:46.066 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:46.066 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:46.066 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:46.066 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.066 1+0 records in 00:08:46.066 1+0 records out 00:08:46.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00235567 s, 1.7 MB/s 00:08:46.066 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.066 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:46.066 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.066 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:46.066 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:46.066 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.066 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:46.066 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:46.325 /dev/nbd12 00:08:46.325 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:46.325 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:46.325 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:08:46.325 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:46.325 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:46.325 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:46.325 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
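Each nbd_start_disk call traced above follows the same readiness pattern: issue the RPC, poll /proc/partitions until the kernel exposes the device (up to 20 attempts), then read a single 4 KiB block with O_DIRECT and check for a non-empty result. A minimal bash sketch of that pattern, reconstructed from the trace (the helper name and the 20-try limit come from the trace; the retry interval and probe-file path are assumptions, since the trace does not show them):

    # Reconstructed from the trace; not the literal nbd_common.sh/autotest_common.sh source.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # Ready once the kernel lists the device in /proc/partitions.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # retry interval assumed; not visible in the trace
        done
        # Sanity-check with one O_DIRECT 4 KiB read and confirm a non-zero result.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }

The remaining devices below (nbd12 through nbd14) go through the same sequence before the test moves on.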
00:08:46.325 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:46.325 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:46.325 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:46.325 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.325 1+0 records in 00:08:46.325 1+0 records out 00:08:46.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000673005 s, 6.1 MB/s 00:08:46.325 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.325 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:46.325 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.325 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:46.325 09:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:46.325 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.325 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:46.325 09:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:46.583 /dev/nbd13 00:08:46.583 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:46.583 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:46.583 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:08:46.583 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:46.583 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:46.583 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:46.583 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:08:46.583 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:46.583 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:46.583 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:46.583 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.583 1+0 records in 00:08:46.583 1+0 records out 00:08:46.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000690727 s, 5.9 MB/s 00:08:46.583 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.583 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:46.583 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.583 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:46.583 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:46.583 09:34:24 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.583 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:46.583 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:46.840 /dev/nbd14 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.840 1+0 records in 00:08:46.840 1+0 records out 00:08:46.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00180694 s, 2.3 MB/s 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.840 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:47.098 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:47.098 { 00:08:47.098 "nbd_device": "/dev/nbd0", 00:08:47.098 "bdev_name": "Nvme0n1" 00:08:47.098 }, 00:08:47.098 { 00:08:47.098 "nbd_device": "/dev/nbd1", 00:08:47.098 "bdev_name": "Nvme1n1p1" 00:08:47.098 }, 00:08:47.098 { 00:08:47.098 "nbd_device": "/dev/nbd10", 00:08:47.098 "bdev_name": "Nvme1n1p2" 00:08:47.098 }, 00:08:47.098 { 00:08:47.098 "nbd_device": "/dev/nbd11", 00:08:47.098 "bdev_name": "Nvme2n1" 00:08:47.098 }, 00:08:47.098 { 00:08:47.098 "nbd_device": "/dev/nbd12", 00:08:47.098 "bdev_name": "Nvme2n2" 00:08:47.098 }, 00:08:47.098 { 00:08:47.098 "nbd_device": "/dev/nbd13", 00:08:47.098 "bdev_name": "Nvme2n3" 
00:08:47.098 }, 00:08:47.098 { 00:08:47.098 "nbd_device": "/dev/nbd14", 00:08:47.098 "bdev_name": "Nvme3n1" 00:08:47.098 } 00:08:47.098 ]' 00:08:47.098 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:47.098 { 00:08:47.098 "nbd_device": "/dev/nbd0", 00:08:47.098 "bdev_name": "Nvme0n1" 00:08:47.098 }, 00:08:47.098 { 00:08:47.098 "nbd_device": "/dev/nbd1", 00:08:47.098 "bdev_name": "Nvme1n1p1" 00:08:47.098 }, 00:08:47.098 { 00:08:47.098 "nbd_device": "/dev/nbd10", 00:08:47.098 "bdev_name": "Nvme1n1p2" 00:08:47.098 }, 00:08:47.098 { 00:08:47.098 "nbd_device": "/dev/nbd11", 00:08:47.098 "bdev_name": "Nvme2n1" 00:08:47.098 }, 00:08:47.098 { 00:08:47.098 "nbd_device": "/dev/nbd12", 00:08:47.098 "bdev_name": "Nvme2n2" 00:08:47.098 }, 00:08:47.098 { 00:08:47.098 "nbd_device": "/dev/nbd13", 00:08:47.098 "bdev_name": "Nvme2n3" 00:08:47.098 }, 00:08:47.098 { 00:08:47.098 "nbd_device": "/dev/nbd14", 00:08:47.098 "bdev_name": "Nvme3n1" 00:08:47.098 } 00:08:47.098 ]' 00:08:47.098 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:47.098 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:47.098 /dev/nbd1 00:08:47.098 /dev/nbd10 00:08:47.098 /dev/nbd11 00:08:47.098 /dev/nbd12 00:08:47.098 /dev/nbd13 00:08:47.098 /dev/nbd14' 00:08:47.098 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:47.098 /dev/nbd1 00:08:47.098 /dev/nbd10 00:08:47.098 /dev/nbd11 00:08:47.098 /dev/nbd12 00:08:47.098 /dev/nbd13 00:08:47.098 /dev/nbd14' 00:08:47.098 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:47.098 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:47.098 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:47.098 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:47.098 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:47.098 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:47.098 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:47.098 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:47.098 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:47.098 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:47.098 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:47.098 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:47.098 256+0 records in 00:08:47.098 256+0 records out 00:08:47.098 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011877 s, 88.3 MB/s 00:08:47.098 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:47.098 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:47.356 256+0 records in 00:08:47.356 256+0 records out 00:08:47.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.122839 s, 8.5 MB/s 00:08:47.356 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:47.356 09:34:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:47.356 256+0 records in 00:08:47.356 256+0 records out 00:08:47.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126253 s, 8.3 MB/s 00:08:47.356 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:47.356 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:47.615 256+0 records in 00:08:47.615 256+0 records out 00:08:47.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136797 s, 7.7 MB/s 00:08:47.615 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:47.615 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:47.615 256+0 records in 00:08:47.615 256+0 records out 00:08:47.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137145 s, 7.6 MB/s 00:08:47.615 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:47.615 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:47.875 256+0 records in 00:08:47.875 256+0 records out 00:08:47.875 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135232 s, 7.8 MB/s 00:08:47.875 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:47.875 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:47.875 256+0 records in 00:08:47.875 256+0 records out 00:08:47.875 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135793 s, 7.7 MB/s 00:08:47.875 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:47.875 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:48.133 256+0 records in 00:08:48.133 256+0 records out 00:08:48.133 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135225 s, 7.8 MB/s 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.133 09:34:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:48.392 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:48.392 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:48.392 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:48.392 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.392 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.392 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:48.392 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:48.392 09:34:26 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:48.392 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.392 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:48.650 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:48.650 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:48.650 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:48.650 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.650 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.650 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:48.650 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:48.650 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.650 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.650 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:48.650 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:48.650 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:48.650 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:48.650 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.650 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.650 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:48.650 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:48.650 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.650 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.650 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:48.909 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:48.909 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:48.909 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:48.909 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.909 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.909 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:48.909 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:48.909 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.909 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.909 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:49.168 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:49.168 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:49.168 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:49.168 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.168 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.168 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:49.168 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:49.168 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.168 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.168 09:34:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:49.427 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:49.427 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:49.427 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:49.427 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.427 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.427 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:49.427 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:49.427 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.427 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.427 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:49.685 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:49.685 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:49.685 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:49.685 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.685 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.685 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:49.685 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:49.685 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.685 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:49.685 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.685 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:49.944 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:49.944 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:49.944 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:49.944 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:49.944 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:49.944 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:49.944 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:49.944 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:49.944 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:49.944 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:49.944 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:49.944 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:49.944 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:49.944 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.944 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:49.944 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:49.944 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:49.944 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:49.944 malloc_lvol_verify 00:08:50.203 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:50.203 697fdc34-0350-42f5-98f8-a5bff8b6483c 00:08:50.203 09:34:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:50.462 543995d8-da59-42bf-aaeb-0fa063b05bd6 00:08:50.462 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:50.720 /dev/nbd0 00:08:50.720 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:50.720 mke2fs 1.46.5 (30-Dec-2021) 00:08:50.720 Discarding device blocks: 0/4096 done 00:08:50.720 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:50.720 00:08:50.720 Allocating group tables: 0/1 done 00:08:50.720 Writing inode tables: 0/1 done 00:08:50.720 Creating journal (1024 blocks): done 00:08:50.720 Writing superblocks and filesystem accounting information: 0/1 done 00:08:50.720 00:08:50.720 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:50.720 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:50.721 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.721 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:50.721 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:50.721 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:50.721 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:50.721 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 78312 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 78312 ']' 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 78312 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78312 00:08:50.979 killing process with pid 78312 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78312' 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 78312 00:08:50.979 09:34:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 78312 00:08:51.237 09:34:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:51.237 00:08:51.237 real 0m11.243s 00:08:51.237 user 0m15.034s 00:08:51.237 sys 0m5.059s 00:08:51.237 09:34:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.237 ************************************ 00:08:51.237 END TEST bdev_nbd 00:08:51.237 ************************************ 00:08:51.237 09:34:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:51.237 09:34:28 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:51.237 09:34:28 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:08:51.237 skipping fio tests on NVMe due to multi-ns failures. 00:08:51.237 09:34:28 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:08:51.237 09:34:28 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:51.237 09:34:28 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:51.237 09:34:28 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:51.237 09:34:28 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:51.237 09:34:28 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.237 09:34:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:51.237 ************************************ 00:08:51.237 START TEST bdev_verify 00:08:51.237 ************************************ 00:08:51.237 09:34:28 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:51.237 [2024-07-24 09:34:29.052185] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:08:51.237 [2024-07-24 09:34:29.052357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78726 ] 00:08:51.496 [2024-07-24 09:34:29.225663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:51.496 [2024-07-24 09:34:29.274341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.496 [2024-07-24 09:34:29.274442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.063 Running I/O for 5 seconds... 
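Before the verify run launched just above, the nbd stage closed out with two data-level checks: a 1 MiB random-pattern write/compare across all seven NBD devices, and a small logical-volume round trip (malloc bdev, lvstore, lvol, /dev/nbd0, mkfs.ext4). A condensed bash sketch of those flows as they appear in the trace (RPC names, sizes and counts per the trace; paths shortened from the absolute /home/vagrant/spdk_repo/spdk paths; a summary, not the script verbatim):

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
    tmp=test/bdev/nbdrandtest

    # 1) Random-pattern data verification over every exported device.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB pattern
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write phase
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$nbd"                              # byte-for-byte verify
    done
    rm "$tmp"
    for nbd in "${nbd_list[@]}"; do
        $rpc nbd_stop_disk "$nbd"
    done
    # grep -c exits non-zero on zero matches, hence the bare 'true' seen in the trace.
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]

    # 2) Logical-volume round trip over NBD (values as in the trace).
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc bdev_lvol_create lvol 4 -l lvs
    $rpc nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd0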
00:08:57.333 00:08:57.334 Latency(us) 00:08:57.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.334 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:57.334 Verification LBA range: start 0x0 length 0xbd0bd 00:08:57.334 Nvme0n1 : 5.09 1584.79 6.19 0.00 0.00 80625.55 16318.20 80432.94 00:08:57.334 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:57.334 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:57.334 Nvme0n1 : 5.10 1531.51 5.98 0.00 0.00 82683.25 17476.27 91381.92 00:08:57.334 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:57.334 Verification LBA range: start 0x0 length 0x4ff80 00:08:57.334 Nvme1n1p1 : 5.09 1583.66 6.19 0.00 0.00 80566.21 17686.82 78327.36 00:08:57.334 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:57.334 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:57.334 Nvme1n1p1 : 5.10 1530.51 5.98 0.00 0.00 82612.19 15054.86 88013.01 00:08:57.334 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:57.334 Verification LBA range: start 0x0 length 0x4ff7f 00:08:57.334 Nvme1n1p2 : 5.10 1582.61 6.18 0.00 0.00 80483.10 18529.05 78748.48 00:08:57.334 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:57.334 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:57.334 Nvme1n1p2 : 5.09 1535.38 6.00 0.00 0.00 83213.54 16002.36 86328.55 00:08:57.334 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:57.334 Verification LBA range: start 0x0 length 0x80000 00:08:57.334 Nvme2n1 : 5.10 1581.68 6.18 0.00 0.00 80403.72 19687.12 81275.17 00:08:57.334 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:57.334 Verification LBA range: start 0x80000 length 0x80000 00:08:57.334 Nvme2n1 : 5.09 1534.90 6.00 0.00 0.00 83092.80 15791.81 83801.86 00:08:57.334 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:57.334 Verification LBA range: start 0x0 length 0x80000 00:08:57.334 Nvme2n2 : 5.10 1580.69 6.17 0.00 0.00 80313.61 20634.63 82117.40 00:08:57.334 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:57.334 Verification LBA range: start 0x80000 length 0x80000 00:08:57.334 Nvme2n2 : 5.09 1534.48 5.99 0.00 0.00 82959.43 15265.41 79590.71 00:08:57.334 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:57.334 Verification LBA range: start 0x0 length 0x80000 00:08:57.334 Nvme2n3 : 5.10 1579.69 6.17 0.00 0.00 80225.83 18423.78 85065.20 00:08:57.334 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:57.334 Verification LBA range: start 0x80000 length 0x80000 00:08:57.334 Nvme2n3 : 5.09 1533.41 5.99 0.00 0.00 82872.69 17055.15 80432.94 00:08:57.334 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:57.334 Verification LBA range: start 0x0 length 0x20000 00:08:57.334 Nvme3n1 : 5.11 1579.33 6.17 0.00 0.00 80124.28 15581.25 84222.97 00:08:57.334 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:57.334 Verification LBA range: start 0x20000 length 0x20000 00:08:57.334 Nvme3n1 : 5.10 1532.42 5.99 0.00 0.00 82774.31 18844.89 88434.12 00:08:57.334 =================================================================================================================== 00:08:57.334 Total : 21805.04 85.18 0.00 0.00 81619.20 15054.86 91381.92 
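A quick consistency check on the verify results above: each job runs with a queue depth of 128 (-q 128) and 4096-byte I/Os (-o 4096), so by Little's law IOPS should be roughly 128 divided by the average latency. For the core-0 Nvme0n1 job, 128 / 80625.55 us gives about 1588 IOPS, in line with the reported 1584.79, and 1584.79 IOPS at 4096 bytes per I/O works out to about 6.19 MiB/s, matching its throughput column.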
00:08:57.593 00:08:57.593 real 0m6.414s 00:08:57.593 user 0m11.931s 00:08:57.593 sys 0m0.284s 00:08:57.593 09:34:35 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.593 09:34:35 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:57.593 ************************************ 00:08:57.593 END TEST bdev_verify 00:08:57.593 ************************************ 00:08:57.852 09:34:35 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:57.852 09:34:35 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:57.852 09:34:35 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.852 09:34:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:57.852 ************************************ 00:08:57.852 START TEST bdev_verify_big_io 00:08:57.852 ************************************ 00:08:57.852 09:34:35 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:57.852 [2024-07-24 09:34:35.533412] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:08:57.852 [2024-07-24 09:34:35.533594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78813 ] 00:08:58.120 [2024-07-24 09:34:35.702810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:58.120 [2024-07-24 09:34:35.751033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.120 [2024-07-24 09:34:35.751102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.688 Running I/O for 5 seconds... 
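The big-I/O pass starting here repeats the same verify workload with 65536-byte (64 KiB) I/Os instead of 4 KiB, so in the table that follows each job's MiB/s column is simply its IOPS divided by 16 (64 KiB is 1/16 MiB): for example, the core-0 Nvme0n1 job's 126.36 IOPS works out to about 7.90 MiB/s, as reported.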
00:09:05.251 00:09:05.251 Latency(us) 00:09:05.251 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.251 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.251 Verification LBA range: start 0x0 length 0xbd0b 00:09:05.251 Nvme0n1 : 5.69 126.36 7.90 0.00 0.00 974438.99 30530.83 1091529.72 00:09:05.251 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.251 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:05.251 Nvme0n1 : 5.84 98.58 6.16 0.00 0.00 1238609.29 22529.64 1657508.09 00:09:05.251 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.251 Verification LBA range: start 0x0 length 0x4ff8 00:09:05.251 Nvme1n1p1 : 5.77 134.24 8.39 0.00 0.00 902729.34 61903.88 929821.61 00:09:05.251 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.251 Verification LBA range: start 0x4ff8 length 0x4ff8 00:09:05.251 Nvme1n1p1 : 5.74 144.22 9.01 0.00 0.00 833414.79 83801.86 822016.21 00:09:05.251 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.251 Verification LBA range: start 0x0 length 0x4ff7 00:09:05.251 Nvme1n1p2 : 5.77 132.64 8.29 0.00 0.00 888052.52 62325.00 943297.29 00:09:05.251 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.251 Verification LBA range: start 0x4ff7 length 0x4ff7 00:09:05.251 Nvme1n1p2 : 5.69 146.34 9.15 0.00 0.00 809837.03 88855.24 734424.31 00:09:05.251 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.251 Verification LBA range: start 0x0 length 0x8000 00:09:05.251 Nvme2n1 : 5.70 134.85 8.43 0.00 0.00 863410.75 61482.77 835491.88 00:09:05.251 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.251 Verification LBA range: start 0x8000 length 0x8000 00:09:05.251 Nvme2n1 : 5.77 150.80 9.42 0.00 0.00 768466.37 55587.16 741162.15 00:09:05.251 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.251 Verification LBA range: start 0x0 length 0x8000 00:09:05.251 Nvme2n2 : 5.77 138.11 8.63 0.00 0.00 820641.90 63167.23 848967.56 00:09:05.251 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.251 Verification LBA range: start 0x8000 length 0x8000 00:09:05.251 Nvme2n2 : 5.77 146.41 9.15 0.00 0.00 775689.15 26003.84 1556440.52 00:09:05.251 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.251 Verification LBA range: start 0x0 length 0x8000 00:09:05.251 Nvme2n3 : 5.85 148.79 9.30 0.00 0.00 748188.84 31373.06 869181.07 00:09:05.251 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.251 Verification LBA range: start 0x8000 length 0x8000 00:09:05.251 Nvme2n3 : 5.81 151.44 9.47 0.00 0.00 730840.65 33689.19 1583391.87 00:09:05.251 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.251 Verification LBA range: start 0x0 length 0x2000 00:09:05.251 Nvme3n1 : 5.85 157.68 9.86 0.00 0.00 690416.81 2697.77 976986.47 00:09:05.251 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.251 Verification LBA range: start 0x2000 length 0x2000 00:09:05.251 Nvme3n1 : 5.87 172.35 10.77 0.00 0.00 628757.37 1868.70 1610343.22 00:09:05.251 =================================================================================================================== 00:09:05.251 Total : 1982.80 123.93 0.00 0.00 817258.84 1868.70 
1657508.09 00:09:05.251 00:09:05.251 real 0m7.376s 00:09:05.251 user 0m13.806s 00:09:05.251 sys 0m0.312s 00:09:05.251 09:34:42 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.251 09:34:42 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:05.251 ************************************ 00:09:05.251 END TEST bdev_verify_big_io 00:09:05.251 ************************************ 00:09:05.251 09:34:42 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:05.251 09:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:05.251 09:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.251 09:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:05.251 ************************************ 00:09:05.251 START TEST bdev_write_zeroes 00:09:05.251 ************************************ 00:09:05.251 09:34:42 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:05.251 [2024-07-24 09:34:42.988567] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:09:05.251 [2024-07-24 09:34:42.988768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78917 ] 00:09:05.509 [2024-07-24 09:34:43.153524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.509 [2024-07-24 09:34:43.208695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.073 Running I/O for 1 seconds... 
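The write_zeroes pass reuses the same bdevperf binary and JSON config but switches the workload to the bdev write-zeroes path, shortens the run to one second, and passes no core mask, so it runs on the default single core (the EAL line above shows -c 0x1); that is why the next table lists one job per bdev instead of two. A minimal equivalent invocation (sketch, paths per the trace):

    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1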
00:09:07.972 00:09:07.972 Latency(us) 00:09:07.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.972 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:07.972 Nvme0n1 : 1.71 2590.69 10.12 0.00 0.00 42234.61 6685.20 970248.64 00:09:07.972 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:07.972 Nvme1n1p1 : 1.14 4302.04 16.80 0.00 0.00 29642.89 9896.20 256037.83 00:09:07.972 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:07.972 Nvme1n1p2 : 1.15 4261.07 16.64 0.00 0.00 29859.44 9738.28 256037.83 00:09:07.972 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:07.972 Nvme2n1 : 1.15 4294.74 16.78 0.00 0.00 29545.67 9738.28 256037.83 00:09:07.972 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:07.972 Nvme2n2 : 1.14 4316.84 16.86 0.00 0.00 28723.78 9738.28 254353.38 00:09:07.972 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:07.972 Nvme2n3 : 1.14 4312.29 16.84 0.00 0.00 28663.44 9685.64 254353.38 00:09:07.972 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:07.973 Nvme3n1 : 1.14 4364.64 17.05 0.00 0.00 28254.35 9633.00 254353.38 00:09:07.973 =================================================================================================================== 00:09:07.973 Total : 28442.32 111.10 0.00 0.00 30819.62 6685.20 970248.64 00:09:07.973 00:09:07.973 real 0m2.876s 00:09:07.973 user 0m2.486s 00:09:07.973 sys 0m0.277s 00:09:07.973 09:34:45 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.973 09:34:45 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:07.973 ************************************ 00:09:07.973 END TEST bdev_write_zeroes 00:09:07.973 ************************************ 00:09:08.230 09:34:45 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:08.230 09:34:45 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:08.230 09:34:45 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.230 09:34:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:08.230 ************************************ 00:09:08.230 START TEST bdev_json_nonenclosed 00:09:08.230 ************************************ 00:09:08.230 09:34:45 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:08.230 [2024-07-24 09:34:45.932370] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:09:08.230 [2024-07-24 09:34:45.932507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78966 ] 00:09:08.488 [2024-07-24 09:34:46.100075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.488 [2024-07-24 09:34:46.144314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.488 [2024-07-24 09:34:46.144430] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:08.488 [2024-07-24 09:34:46.144475] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:08.488 [2024-07-24 09:34:46.144504] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:08.488 00:09:08.488 real 0m0.419s 00:09:08.488 user 0m0.171s 00:09:08.488 sys 0m0.144s 00:09:08.488 09:34:46 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.488 09:34:46 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:08.488 ************************************ 00:09:08.488 END TEST bdev_json_nonenclosed 00:09:08.488 ************************************ 00:09:08.746 09:34:46 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:08.746 09:34:46 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:08.746 09:34:46 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.746 09:34:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:08.746 ************************************ 00:09:08.746 START TEST bdev_json_nonarray 00:09:08.746 ************************************ 00:09:08.746 09:34:46 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:08.746 [2024-07-24 09:34:46.422232] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:09:08.746 [2024-07-24 09:34:46.422383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78991 ] 00:09:09.004 [2024-07-24 09:34:46.596803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.004 [2024-07-24 09:34:46.646440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.004 [2024-07-24 09:34:46.646569] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
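This test and the nonenclosed one just above feed bdevperf deliberately malformed JSON configs and expect a clean, non-crashing rejection. A well-formed SPDK config is an object of the form { "subsystems": [ ... ] }; hypothetical shapes that would provoke the two errors seen here (the actual nonenclosed.json and nonarray.json contents are not shown in this log):

    [ { "subsystem": "bdev", "config": [] } ]        <- top level not enclosed in {}
    { "subsystems": { "subsystem": "bdev" } }        <- "subsystems" is not an array

In both cases json_config_prepare_ctx rejects the config and the app exits via spdk_app_stop with a non-zero status, as the warnings above and below show.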
00:09:09.004 [2024-07-24 09:34:46.646615] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:09.004 [2024-07-24 09:34:46.646635] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:09.004 00:09:09.004 real 0m0.442s 00:09:09.004 user 0m0.185s 00:09:09.004 sys 0m0.152s 00:09:09.004 09:34:46 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.004 09:34:46 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:09.004 ************************************ 00:09:09.004 END TEST bdev_json_nonarray 00:09:09.004 ************************************ 00:09:09.262 09:34:46 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:09:09.262 09:34:46 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:09:09.262 09:34:46 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:09.262 09:34:46 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:09.262 09:34:46 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.262 09:34:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:09.262 ************************************ 00:09:09.262 START TEST bdev_gpt_uuid 00:09:09.262 ************************************ 00:09:09.262 09:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:09:09.262 09:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:09:09.262 09:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:09:09.262 09:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=79017 00:09:09.262 09:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:09.262 09:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:09.262 09:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 79017 00:09:09.262 09:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 79017 ']' 00:09:09.262 09:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.262 09:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:09.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.262 09:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.262 09:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:09.262 09:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:09.262 [2024-07-24 09:34:46.951968] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
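The two bdev_json_* tests above feed bdevperf deliberately malformed --json configuration files and expect exactly the json_config.c errors logged here: the configuration must be a single JSON object, and its "subsystems" member must be an array. As a minimal sketch of the expected shape (the actual nonenclosed.json and nonarray.json fixtures are not reproduced in this log), a well-formed config looks like:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": []
      }
    ]
  }

Dropping the outer braces produces the "not enclosed in {}" error, and making "subsystems" anything other than an array produces the "'subsystems' should be an array" error.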
00:09:09.262 [2024-07-24 09:34:46.952122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79017 ] 00:09:09.521 [2024-07-24 09:34:47.126578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.521 [2024-07-24 09:34:47.171984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.088 09:34:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.088 09:34:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:09:10.088 09:34:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:10.088 09:34:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.088 09:34:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:10.348 Some configs were skipped because the RPC state that can call them passed over. 00:09:10.348 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.348 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:09:10.348 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.348 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:10.348 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.348 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:10.348 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.348 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:10.348 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.348 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:09:10.348 { 00:09:10.348 "name": "Nvme1n1p1", 00:09:10.348 "aliases": [ 00:09:10.348 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:10.348 ], 00:09:10.348 "product_name": "GPT Disk", 00:09:10.348 "block_size": 4096, 00:09:10.348 "num_blocks": 655104, 00:09:10.348 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:10.348 "assigned_rate_limits": { 00:09:10.348 "rw_ios_per_sec": 0, 00:09:10.348 "rw_mbytes_per_sec": 0, 00:09:10.348 "r_mbytes_per_sec": 0, 00:09:10.348 "w_mbytes_per_sec": 0 00:09:10.348 }, 00:09:10.348 "claimed": false, 00:09:10.348 "zoned": false, 00:09:10.348 "supported_io_types": { 00:09:10.348 "read": true, 00:09:10.348 "write": true, 00:09:10.348 "unmap": true, 00:09:10.348 "flush": true, 00:09:10.348 "reset": true, 00:09:10.348 "nvme_admin": false, 00:09:10.348 "nvme_io": false, 00:09:10.348 "nvme_io_md": false, 00:09:10.348 "write_zeroes": true, 00:09:10.348 "zcopy": false, 00:09:10.348 "get_zone_info": false, 00:09:10.348 "zone_management": false, 00:09:10.348 "zone_append": false, 00:09:10.348 "compare": true, 00:09:10.348 "compare_and_write": false, 00:09:10.348 "abort": true, 00:09:10.348 "seek_hole": false, 00:09:10.348 "seek_data": false, 00:09:10.348 "copy": true, 00:09:10.348 "nvme_iov_md": false 00:09:10.348 }, 00:09:10.348 "driver_specific": { 
00:09:10.348 "gpt": { 00:09:10.348 "base_bdev": "Nvme1n1", 00:09:10.348 "offset_blocks": 256, 00:09:10.348 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:10.348 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:10.348 "partition_name": "SPDK_TEST_first" 00:09:10.348 } 00:09:10.348 } 00:09:10.348 } 00:09:10.348 ]' 00:09:10.348 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:09:10.348 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:09:10.348 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:09:10.607 { 00:09:10.607 "name": "Nvme1n1p2", 00:09:10.607 "aliases": [ 00:09:10.607 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:10.607 ], 00:09:10.607 "product_name": "GPT Disk", 00:09:10.607 "block_size": 4096, 00:09:10.607 "num_blocks": 655103, 00:09:10.607 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:10.607 "assigned_rate_limits": { 00:09:10.607 "rw_ios_per_sec": 0, 00:09:10.607 "rw_mbytes_per_sec": 0, 00:09:10.607 "r_mbytes_per_sec": 0, 00:09:10.607 "w_mbytes_per_sec": 0 00:09:10.607 }, 00:09:10.607 "claimed": false, 00:09:10.607 "zoned": false, 00:09:10.607 "supported_io_types": { 00:09:10.607 "read": true, 00:09:10.607 "write": true, 00:09:10.607 "unmap": true, 00:09:10.607 "flush": true, 00:09:10.607 "reset": true, 00:09:10.607 "nvme_admin": false, 00:09:10.607 "nvme_io": false, 00:09:10.607 "nvme_io_md": false, 00:09:10.607 "write_zeroes": true, 00:09:10.607 "zcopy": false, 00:09:10.607 "get_zone_info": false, 00:09:10.607 "zone_management": false, 00:09:10.607 "zone_append": false, 00:09:10.607 "compare": true, 00:09:10.607 "compare_and_write": false, 00:09:10.607 "abort": true, 00:09:10.607 "seek_hole": false, 00:09:10.607 "seek_data": false, 00:09:10.607 "copy": true, 00:09:10.607 "nvme_iov_md": false 00:09:10.607 }, 00:09:10.607 "driver_specific": { 00:09:10.607 "gpt": { 00:09:10.607 "base_bdev": "Nvme1n1", 00:09:10.607 "offset_blocks": 655360, 00:09:10.607 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:10.607 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:10.607 "partition_name": "SPDK_TEST_second" 00:09:10.607 } 00:09:10.607 } 00:09:10.607 } 00:09:10.607 ]' 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 79017 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 79017 ']' 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 79017 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79017 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:10.607 killing process with pid 79017 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79017' 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 79017 00:09:10.607 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 79017 00:09:11.175 00:09:11.175 real 0m1.957s 00:09:11.175 user 0m2.041s 00:09:11.175 sys 0m0.505s 00:09:11.175 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.175 09:34:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:11.175 ************************************ 00:09:11.175 END TEST bdev_gpt_uuid 00:09:11.175 ************************************ 00:09:11.175 09:34:48 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:09:11.175 09:34:48 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:11.175 09:34:48 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:09:11.175 09:34:48 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:11.175 09:34:48 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:11.175 09:34:48 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:11.175 09:34:48 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:11.175 09:34:48 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:11.175 09:34:48 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:11.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:12.002 Waiting for block devices as requested 00:09:12.002 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.002 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:09:12.261 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.261 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:17.530 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:17.530 09:34:55 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:09:17.530 09:34:55 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:09:17.789 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:17.789 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:17.789 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:17.789 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:17.789 09:34:55 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:17.789 00:09:17.789 real 0m52.451s 00:09:17.789 user 1m3.543s 00:09:17.789 sys 0m11.516s 00:09:17.789 09:34:55 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.789 09:34:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:17.789 ************************************ 00:09:17.789 END TEST blockdev_nvme_gpt 00:09:17.789 ************************************ 00:09:17.789 09:34:55 -- spdk/autotest.sh@220 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:17.789 09:34:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:17.789 09:34:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.789 09:34:55 -- common/autotest_common.sh@10 -- # set +x 00:09:17.789 ************************************ 00:09:17.789 START TEST nvme 00:09:17.789 ************************************ 00:09:17.789 09:34:55 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:17.789 * Looking for test storage... 00:09:17.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:17.789 09:34:55 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:18.724 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:19.290 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.290 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.290 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.290 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.548 09:34:57 nvme -- nvme/nvme.sh@79 -- # uname 00:09:19.548 09:34:57 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:19.548 09:34:57 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:19.548 09:34:57 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:19.548 09:34:57 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:19.548 09:34:57 nvme -- common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:09:19.548 09:34:57 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:09:19.548 09:34:57 nvme -- common/autotest_common.sh@1071 -- # stubpid=79646 00:09:19.548 09:34:57 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:19.548 Waiting for stub to ready for secondary processes... 00:09:19.548 09:34:57 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 
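A note on the wipefs output above: the 8 bytes erased at offset 0x00001000 and again near the end of the device, 45 46 49 20 50 41 52 54, are the ASCII signature "EFI PART" of the primary and backup GPT headers (LBA 1 and the last LBA, given the 4096-byte logical blocks implied by the 0x1000 offset), and the 2 bytes 55 aa at offset 0x000001fe are the protective-MBR boot signature. The hex decodes directly, for example:

  printf '\x45\x46\x49\x20\x50\x41\x52\x54'; echo    # prints: EFI PART

so wipefs --all strips both GPT headers and the PMBR, and the partition-table re-read ioctl reported afterwards makes the kernel drop the stale nvme0n1 partitions before the next test suite runs.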
00:09:19.548 09:34:57 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:19.548 09:34:57 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/79646 ]] 00:09:19.548 09:34:57 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:09:19.548 [2024-07-24 09:34:57.211964] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:09:19.548 [2024-07-24 09:34:57.212099] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:20.481 09:34:58 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:20.481 09:34:58 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/79646 ]] 00:09:20.481 09:34:58 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:09:20.481 [2024-07-24 09:34:58.213404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:20.481 [2024-07-24 09:34:58.241381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.481 [2024-07-24 09:34:58.241491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.481 [2024-07-24 09:34:58.241594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.481 [2024-07-24 09:34:58.253898] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:20.481 [2024-07-24 09:34:58.253955] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:20.481 [2024-07-24 09:34:58.267001] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:20.481 [2024-07-24 09:34:58.267239] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:20.481 [2024-07-24 09:34:58.268077] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:20.481 [2024-07-24 09:34:58.268272] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:20.481 [2024-07-24 09:34:58.268339] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:20.481 [2024-07-24 09:34:58.269121] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:20.481 [2024-07-24 09:34:58.269466] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:20.481 [2024-07-24 09:34:58.269542] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:20.481 [2024-07-24 09:34:58.270494] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:20.481 [2024-07-24 09:34:58.270712] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:20.481 [2024-07-24 09:34:58.270773] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:20.481 [2024-07-24 09:34:58.270851] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:20.481 [2024-07-24 09:34:58.270930] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:21.414 done. 00:09:21.414 09:34:59 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:21.414 09:34:59 nvme -- common/autotest_common.sh@1078 -- # echo done. 
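The start_stub step above is the autotest multi-process pattern: test/app/stub/stub is launched as the primary SPDK/DPDK process (here -s 4096 MB of hugepage memory, -i 0 as the shared-memory id, -m 0xE as the core mask), it attaches to the NVMe controllers and creates the CUSE nodes listed above, and the harness waits for /var/run/spdk_stub0 before continuing. The per-test binaries that follow then reuse the same instance by passing the same shared-memory id; schematically (paths as they appear in this log):

  /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0

which matches the -i 0 visible in the spdk_nvme_identify invocation for the nvme_identify test below.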
00:09:21.415 09:34:59 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:21.415 09:34:59 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:09:21.415 09:34:59 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.415 09:34:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:21.415 ************************************ 00:09:21.415 START TEST nvme_reset 00:09:21.415 ************************************ 00:09:21.415 09:34:59 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:21.672 Initializing NVMe Controllers 00:09:21.672 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:21.672 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:21.672 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:21.672 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:21.672 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:21.672 00:09:21.672 real 0m0.255s 00:09:21.672 user 0m0.096s 00:09:21.672 sys 0m0.116s 00:09:21.672 09:34:59 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.672 09:34:59 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:21.672 ************************************ 00:09:21.672 END TEST nvme_reset 00:09:21.672 ************************************ 00:09:21.930 09:34:59 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:21.931 09:34:59 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:21.931 09:34:59 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.931 09:34:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:21.931 ************************************ 00:09:21.931 START TEST nvme_identify 00:09:21.931 ************************************ 00:09:21.931 09:34:59 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:09:21.931 09:34:59 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:21.931 09:34:59 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:21.931 09:34:59 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:21.931 09:34:59 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:21.931 09:34:59 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:09:21.931 09:34:59 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:09:21.931 09:34:59 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:21.931 09:34:59 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:09:21.931 09:34:59 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:21.931 09:34:59 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:09:21.931 09:34:59 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:21.931 09:34:59 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:22.192 [2024-07-24 09:34:59.846576] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 79679 terminated unexpected 00:09:22.192 ===================================================== 00:09:22.192 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:22.192 
===================================================== 00:09:22.192 Controller Capabilities/Features 00:09:22.192 ================================ 00:09:22.192 Vendor ID: 1b36 00:09:22.192 Subsystem Vendor ID: 1af4 00:09:22.192 Serial Number: 12340 00:09:22.192 Model Number: QEMU NVMe Ctrl 00:09:22.192 Firmware Version: 8.0.0 00:09:22.192 Recommended Arb Burst: 6 00:09:22.192 IEEE OUI Identifier: 00 54 52 00:09:22.192 Multi-path I/O 00:09:22.192 May have multiple subsystem ports: No 00:09:22.192 May have multiple controllers: No 00:09:22.192 Associated with SR-IOV VF: No 00:09:22.192 Max Data Transfer Size: 524288 00:09:22.192 Max Number of Namespaces: 256 00:09:22.192 Max Number of I/O Queues: 64 00:09:22.192 NVMe Specification Version (VS): 1.4 00:09:22.192 NVMe Specification Version (Identify): 1.4 00:09:22.192 Maximum Queue Entries: 2048 00:09:22.192 Contiguous Queues Required: Yes 00:09:22.192 Arbitration Mechanisms Supported 00:09:22.192 Weighted Round Robin: Not Supported 00:09:22.192 Vendor Specific: Not Supported 00:09:22.192 Reset Timeout: 7500 ms 00:09:22.192 Doorbell Stride: 4 bytes 00:09:22.192 NVM Subsystem Reset: Not Supported 00:09:22.192 Command Sets Supported 00:09:22.192 NVM Command Set: Supported 00:09:22.192 Boot Partition: Not Supported 00:09:22.192 Memory Page Size Minimum: 4096 bytes 00:09:22.192 Memory Page Size Maximum: 65536 bytes 00:09:22.192 Persistent Memory Region: Not Supported 00:09:22.192 Optional Asynchronous Events Supported 00:09:22.192 Namespace Attribute Notices: Supported 00:09:22.192 Firmware Activation Notices: Not Supported 00:09:22.192 ANA Change Notices: Not Supported 00:09:22.192 PLE Aggregate Log Change Notices: Not Supported 00:09:22.192 LBA Status Info Alert Notices: Not Supported 00:09:22.192 EGE Aggregate Log Change Notices: Not Supported 00:09:22.192 Normal NVM Subsystem Shutdown event: Not Supported 00:09:22.192 Zone Descriptor Change Notices: Not Supported 00:09:22.192 Discovery Log Change Notices: Not Supported 00:09:22.192 Controller Attributes 00:09:22.192 128-bit Host Identifier: Not Supported 00:09:22.192 Non-Operational Permissive Mode: Not Supported 00:09:22.192 NVM Sets: Not Supported 00:09:22.192 Read Recovery Levels: Not Supported 00:09:22.192 Endurance Groups: Not Supported 00:09:22.192 Predictable Latency Mode: Not Supported 00:09:22.192 Traffic Based Keep ALive: Not Supported 00:09:22.192 Namespace Granularity: Not Supported 00:09:22.192 SQ Associations: Not Supported 00:09:22.192 UUID List: Not Supported 00:09:22.192 Multi-Domain Subsystem: Not Supported 00:09:22.192 Fixed Capacity Management: Not Supported 00:09:22.192 Variable Capacity Management: Not Supported 00:09:22.192 Delete Endurance Group: Not Supported 00:09:22.192 Delete NVM Set: Not Supported 00:09:22.192 Extended LBA Formats Supported: Supported 00:09:22.192 Flexible Data Placement Supported: Not Supported 00:09:22.192 00:09:22.192 Controller Memory Buffer Support 00:09:22.192 ================================ 00:09:22.192 Supported: No 00:09:22.192 00:09:22.192 Persistent Memory Region Support 00:09:22.192 ================================ 00:09:22.192 Supported: No 00:09:22.192 00:09:22.192 Admin Command Set Attributes 00:09:22.192 ============================ 00:09:22.192 Security Send/Receive: Not Supported 00:09:22.192 Format NVM: Supported 00:09:22.192 Firmware Activate/Download: Not Supported 00:09:22.192 Namespace Management: Supported 00:09:22.192 Device Self-Test: Not Supported 00:09:22.192 Directives: Supported 00:09:22.192 NVMe-MI: Not Supported 
00:09:22.192 Virtualization Management: Not Supported 00:09:22.192 Doorbell Buffer Config: Supported 00:09:22.192 Get LBA Status Capability: Not Supported 00:09:22.192 Command & Feature Lockdown Capability: Not Supported 00:09:22.192 Abort Command Limit: 4 00:09:22.192 Async Event Request Limit: 4 00:09:22.192 Number of Firmware Slots: N/A 00:09:22.192 Firmware Slot 1 Read-Only: N/A 00:09:22.192 Firmware Activation Without Reset: N/A 00:09:22.192 Multiple Update Detection Support: N/A 00:09:22.192 Firmware Update Granularity: No Information Provided 00:09:22.192 Per-Namespace SMART Log: Yes 00:09:22.192 Asymmetric Namespace Access Log Page: Not Supported 00:09:22.192 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:22.192 Command Effects Log Page: Supported 00:09:22.192 Get Log Page Extended Data: Supported 00:09:22.192 Telemetry Log Pages: Not Supported 00:09:22.192 Persistent Event Log Pages: Not Supported 00:09:22.192 Supported Log Pages Log Page: May Support 00:09:22.192 Commands Supported & Effects Log Page: Not Supported 00:09:22.192 Feature Identifiers & Effects Log Page:May Support 00:09:22.192 NVMe-MI Commands & Effects Log Page: May Support 00:09:22.192 Data Area 4 for Telemetry Log: Not Supported 00:09:22.192 Error Log Page Entries Supported: 1 00:09:22.192 Keep Alive: Not Supported 00:09:22.192 00:09:22.192 NVM Command Set Attributes 00:09:22.192 ========================== 00:09:22.192 Submission Queue Entry Size 00:09:22.192 Max: 64 00:09:22.192 Min: 64 00:09:22.192 Completion Queue Entry Size 00:09:22.192 Max: 16 00:09:22.192 Min: 16 00:09:22.192 Number of Namespaces: 256 00:09:22.192 Compare Command: Supported 00:09:22.192 Write Uncorrectable Command: Not Supported 00:09:22.192 Dataset Management Command: Supported 00:09:22.192 Write Zeroes Command: Supported 00:09:22.192 Set Features Save Field: Supported 00:09:22.192 Reservations: Not Supported 00:09:22.192 Timestamp: Supported 00:09:22.192 Copy: Supported 00:09:22.192 Volatile Write Cache: Present 00:09:22.192 Atomic Write Unit (Normal): 1 00:09:22.192 Atomic Write Unit (PFail): 1 00:09:22.192 Atomic Compare & Write Unit: 1 00:09:22.192 Fused Compare & Write: Not Supported 00:09:22.192 Scatter-Gather List 00:09:22.192 SGL Command Set: Supported 00:09:22.192 SGL Keyed: Not Supported 00:09:22.192 SGL Bit Bucket Descriptor: Not Supported 00:09:22.192 SGL Metadata Pointer: Not Supported 00:09:22.192 Oversized SGL: Not Supported 00:09:22.192 SGL Metadata Address: Not Supported 00:09:22.192 SGL Offset: Not Supported 00:09:22.192 Transport SGL Data Block: Not Supported 00:09:22.192 Replay Protected Memory Block: Not Supported 00:09:22.192 00:09:22.192 Firmware Slot Information 00:09:22.192 ========================= 00:09:22.192 Active slot: 1 00:09:22.192 Slot 1 Firmware Revision: 1.0 00:09:22.192 00:09:22.192 00:09:22.192 Commands Supported and Effects 00:09:22.192 ============================== 00:09:22.192 Admin Commands 00:09:22.192 -------------- 00:09:22.192 Delete I/O Submission Queue (00h): Supported 00:09:22.192 Create I/O Submission Queue (01h): Supported 00:09:22.192 Get Log Page (02h): Supported 00:09:22.192 Delete I/O Completion Queue (04h): Supported 00:09:22.192 Create I/O Completion Queue (05h): Supported 00:09:22.192 Identify (06h): Supported 00:09:22.192 Abort (08h): Supported 00:09:22.192 Set Features (09h): Supported 00:09:22.192 Get Features (0Ah): Supported 00:09:22.193 Asynchronous Event Request (0Ch): Supported 00:09:22.193 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:22.193 Directive 
Send (19h): Supported 00:09:22.193 Directive Receive (1Ah): Supported 00:09:22.193 Virtualization Management (1Ch): Supported 00:09:22.193 Doorbell Buffer Config (7Ch): Supported 00:09:22.193 Format NVM (80h): Supported LBA-Change 00:09:22.193 I/O Commands 00:09:22.193 ------------ 00:09:22.193 Flush (00h): Supported LBA-Change 00:09:22.193 Write (01h): Supported LBA-Change 00:09:22.193 Read (02h): Supported 00:09:22.193 Compare (05h): Supported 00:09:22.193 Write Zeroes (08h): Supported LBA-Change 00:09:22.193 Dataset Management (09h): Supported LBA-Change 00:09:22.193 Unknown (0Ch): Supported 00:09:22.193 Unknown (12h): Supported 00:09:22.193 Copy (19h): Supported LBA-Change 00:09:22.193 Unknown (1Dh): Supported LBA-Change 00:09:22.193 00:09:22.193 Error Log 00:09:22.193 ========= 00:09:22.193 00:09:22.193 Arbitration 00:09:22.193 =========== 00:09:22.193 Arbitration Burst: no limit 00:09:22.193 00:09:22.193 Power Management 00:09:22.193 ================ 00:09:22.193 Number of Power States: 1 00:09:22.193 Current Power State: Power State #0 00:09:22.193 Power State #0: 00:09:22.193 Max Power: 25.00 W 00:09:22.193 Non-Operational State: Operational 00:09:22.193 Entry Latency: 16 microseconds 00:09:22.193 Exit Latency: 4 microseconds 00:09:22.193 Relative Read Throughput: 0 00:09:22.193 Relative Read Latency: 0 00:09:22.193 Relative Write Throughput: 0 00:09:22.193 Relative Write Latency: 0 00:09:22.193 Idle Power[2024-07-24 09:34:59.847747] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 79679 terminated unexpected 00:09:22.193 : Not Reported 00:09:22.193 Active Power: Not Reported 00:09:22.193 Non-Operational Permissive Mode: Not Supported 00:09:22.193 00:09:22.193 Health Information 00:09:22.193 ================== 00:09:22.193 Critical Warnings: 00:09:22.193 Available Spare Space: OK 00:09:22.193 Temperature: OK 00:09:22.193 Device Reliability: OK 00:09:22.193 Read Only: No 00:09:22.193 Volatile Memory Backup: OK 00:09:22.193 Current Temperature: 323 Kelvin (50 Celsius) 00:09:22.193 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:22.193 Available Spare: 0% 00:09:22.193 Available Spare Threshold: 0% 00:09:22.193 Life Percentage Used: 0% 00:09:22.193 Data Units Read: 789 00:09:22.193 Data Units Written: 681 00:09:22.193 Host Read Commands: 39407 00:09:22.193 Host Write Commands: 38445 00:09:22.193 Controller Busy Time: 0 minutes 00:09:22.193 Power Cycles: 0 00:09:22.193 Power On Hours: 0 hours 00:09:22.193 Unsafe Shutdowns: 0 00:09:22.193 Unrecoverable Media Errors: 0 00:09:22.193 Lifetime Error Log Entries: 0 00:09:22.193 Warning Temperature Time: 0 minutes 00:09:22.193 Critical Temperature Time: 0 minutes 00:09:22.193 00:09:22.193 Number of Queues 00:09:22.193 ================ 00:09:22.193 Number of I/O Submission Queues: 64 00:09:22.193 Number of I/O Completion Queues: 64 00:09:22.193 00:09:22.193 ZNS Specific Controller Data 00:09:22.193 ============================ 00:09:22.193 Zone Append Size Limit: 0 00:09:22.193 00:09:22.193 00:09:22.193 Active Namespaces 00:09:22.193 ================= 00:09:22.193 Namespace ID:1 00:09:22.193 Error Recovery Timeout: Unlimited 00:09:22.193 Command Set Identifier: NVM (00h) 00:09:22.193 Deallocate: Supported 00:09:22.193 Deallocated/Unwritten Error: Supported 00:09:22.193 Deallocated Read Value: All 0x00 00:09:22.193 Deallocate in Write Zeroes: Not Supported 00:09:22.193 Deallocated Guard Field: 0xFFFF 00:09:22.193 Flush: Supported 00:09:22.193 Reservation: Not Supported 00:09:22.193 Metadata Transferred as: 
Separate Metadata Buffer 00:09:22.193 Namespace Sharing Capabilities: Private 00:09:22.193 Size (in LBAs): 1548666 (5GiB) 00:09:22.193 Capacity (in LBAs): 1548666 (5GiB) 00:09:22.193 Utilization (in LBAs): 1548666 (5GiB) 00:09:22.193 Thin Provisioning: Not Supported 00:09:22.193 Per-NS Atomic Units: No 00:09:22.193 Maximum Single Source Range Length: 128 00:09:22.193 Maximum Copy Length: 128 00:09:22.193 Maximum Source Range Count: 128 00:09:22.193 NGUID/EUI64 Never Reused: No 00:09:22.193 Namespace Write Protected: No 00:09:22.193 Number of LBA Formats: 8 00:09:22.193 Current LBA Format: LBA Format #07 00:09:22.193 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.193 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.193 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.193 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.193 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.193 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.193 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.193 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.193 00:09:22.193 NVM Specific Namespace Data 00:09:22.193 =========================== 00:09:22.193 Logical Block Storage Tag Mask: 0 00:09:22.193 Protection Information Capabilities: 00:09:22.193 16b Guard Protection Information Storage Tag Support: No 00:09:22.193 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:22.193 Storage Tag Check Read Support: No 00:09:22.193 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.193 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.193 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.193 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.193 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.193 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.193 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.193 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.193 ===================================================== 00:09:22.193 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:22.193 ===================================================== 00:09:22.193 Controller Capabilities/Features 00:09:22.193 ================================ 00:09:22.193 Vendor ID: 1b36 00:09:22.193 Subsystem Vendor ID: 1af4 00:09:22.193 Serial Number: 12341 00:09:22.193 Model Number: QEMU NVMe Ctrl 00:09:22.193 Firmware Version: 8.0.0 00:09:22.193 Recommended Arb Burst: 6 00:09:22.193 IEEE OUI Identifier: 00 54 52 00:09:22.193 Multi-path I/O 00:09:22.193 May have multiple subsystem ports: No 00:09:22.193 May have multiple controllers: No 00:09:22.193 Associated with SR-IOV VF: No 00:09:22.193 Max Data Transfer Size: 524288 00:09:22.193 Max Number of Namespaces: 256 00:09:22.193 Max Number of I/O Queues: 64 00:09:22.193 NVMe Specification Version (VS): 1.4 00:09:22.193 NVMe Specification Version (Identify): 1.4 00:09:22.193 Maximum Queue Entries: 2048 00:09:22.193 Contiguous Queues Required: Yes 00:09:22.193 Arbitration Mechanisms Supported 00:09:22.193 Weighted Round Robin: Not Supported 00:09:22.193 Vendor Specific: Not Supported 00:09:22.193 Reset Timeout: 7500 ms 
00:09:22.193 Doorbell Stride: 4 bytes 00:09:22.193 NVM Subsystem Reset: Not Supported 00:09:22.193 Command Sets Supported 00:09:22.193 NVM Command Set: Supported 00:09:22.193 Boot Partition: Not Supported 00:09:22.193 Memory Page Size Minimum: 4096 bytes 00:09:22.193 Memory Page Size Maximum: 65536 bytes 00:09:22.193 Persistent Memory Region: Not Supported 00:09:22.193 Optional Asynchronous Events Supported 00:09:22.193 Namespace Attribute Notices: Supported 00:09:22.193 Firmware Activation Notices: Not Supported 00:09:22.193 ANA Change Notices: Not Supported 00:09:22.193 PLE Aggregate Log Change Notices: Not Supported 00:09:22.193 LBA Status Info Alert Notices: Not Supported 00:09:22.193 EGE Aggregate Log Change Notices: Not Supported 00:09:22.193 Normal NVM Subsystem Shutdown event: Not Supported 00:09:22.193 Zone Descriptor Change Notices: Not Supported 00:09:22.193 Discovery Log Change Notices: Not Supported 00:09:22.193 Controller Attributes 00:09:22.193 128-bit Host Identifier: Not Supported 00:09:22.193 Non-Operational Permissive Mode: Not Supported 00:09:22.193 NVM Sets: Not Supported 00:09:22.193 Read Recovery Levels: Not Supported 00:09:22.193 Endurance Groups: Not Supported 00:09:22.193 Predictable Latency Mode: Not Supported 00:09:22.193 Traffic Based Keep ALive: Not Supported 00:09:22.193 Namespace Granularity: Not Supported 00:09:22.193 SQ Associations: Not Supported 00:09:22.193 UUID List: Not Supported 00:09:22.193 Multi-Domain Subsystem: Not Supported 00:09:22.194 Fixed Capacity Management: Not Supported 00:09:22.194 Variable Capacity Management: Not Supported 00:09:22.194 Delete Endurance Group: Not Supported 00:09:22.194 Delete NVM Set: Not Supported 00:09:22.194 Extended LBA Formats Supported: Supported 00:09:22.194 Flexible Data Placement Supported: Not Supported 00:09:22.194 00:09:22.194 Controller Memory Buffer Support 00:09:22.194 ================================ 00:09:22.194 Supported: No 00:09:22.194 00:09:22.194 Persistent Memory Region Support 00:09:22.194 ================================ 00:09:22.194 Supported: No 00:09:22.194 00:09:22.194 Admin Command Set Attributes 00:09:22.194 ============================ 00:09:22.194 Security Send/Receive: Not Supported 00:09:22.194 Format NVM: Supported 00:09:22.194 Firmware Activate/Download: Not Supported 00:09:22.194 Namespace Management: Supported 00:09:22.194 Device Self-Test: Not Supported 00:09:22.194 Directives: Supported 00:09:22.194 NVMe-MI: Not Supported 00:09:22.194 Virtualization Management: Not Supported 00:09:22.194 Doorbell Buffer Config: Supported 00:09:22.194 Get LBA Status Capability: Not Supported 00:09:22.194 Command & Feature Lockdown Capability: Not Supported 00:09:22.194 Abort Command Limit: 4 00:09:22.194 Async Event Request Limit: 4 00:09:22.194 Number of Firmware Slots: N/A 00:09:22.194 Firmware Slot 1 Read-Only: N/A 00:09:22.194 Firmware Activation Without Reset: N/A 00:09:22.194 Multiple Update Detection Support: N/A 00:09:22.194 Firmware Update Granularity: No Information Provided 00:09:22.194 Per-Namespace SMART Log: Yes 00:09:22.194 Asymmetric Namespace Access Log Page: Not Supported 00:09:22.194 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:22.194 Command Effects Log Page: Supported 00:09:22.194 Get Log Page Extended Data: Supported 00:09:22.194 Telemetry Log Pages: Not Supported 00:09:22.194 Persistent Event Log Pages: Not Supported 00:09:22.194 Supported Log Pages Log Page: May Support 00:09:22.194 Commands Supported & Effects Log Page: Not Supported 00:09:22.194 Feature Identifiers & 
Effects Log Page:May Support 00:09:22.194 NVMe-MI Commands & Effects Log Page: May Support 00:09:22.194 Data Area 4 for Telemetry Log: Not Supported 00:09:22.194 Error Log Page Entries Supported: 1 00:09:22.194 Keep Alive: Not Supported 00:09:22.194 00:09:22.194 NVM Command Set Attributes 00:09:22.194 ========================== 00:09:22.194 Submission Queue Entry Size 00:09:22.194 Max: 64 00:09:22.194 Min: 64 00:09:22.194 Completion Queue Entry Size 00:09:22.194 Max: 16 00:09:22.194 Min: 16 00:09:22.194 Number of Namespaces: 256 00:09:22.194 Compare Command: Supported 00:09:22.194 Write Uncorrectable Command: Not Supported 00:09:22.194 Dataset Management Command: Supported 00:09:22.194 Write Zeroes Command: Supported 00:09:22.194 Set Features Save Field: Supported 00:09:22.194 Reservations: Not Supported 00:09:22.194 Timestamp: Supported 00:09:22.194 Copy: Supported 00:09:22.194 Volatile Write Cache: Present 00:09:22.194 Atomic Write Unit (Normal): 1 00:09:22.194 Atomic Write Unit (PFail): 1 00:09:22.194 Atomic Compare & Write Unit: 1 00:09:22.194 Fused Compare & Write: Not Supported 00:09:22.194 Scatter-Gather List 00:09:22.194 SGL Command Set: Supported 00:09:22.194 SGL Keyed: Not Supported 00:09:22.194 SGL Bit Bucket Descriptor: Not Supported 00:09:22.194 SGL Metadata Pointer: Not Supported 00:09:22.194 Oversized SGL: Not Supported 00:09:22.194 SGL Metadata Address: Not Supported 00:09:22.194 SGL Offset: Not Supported 00:09:22.194 Transport SGL Data Block: Not Supported 00:09:22.194 Replay Protected Memory Block: Not Supported 00:09:22.194 00:09:22.194 Firmware Slot Information 00:09:22.194 ========================= 00:09:22.194 Active slot: 1 00:09:22.194 Slot 1 Firmware Revision: 1.0 00:09:22.194 00:09:22.194 00:09:22.194 Commands Supported and Effects 00:09:22.194 ============================== 00:09:22.194 Admin Commands 00:09:22.194 -------------- 00:09:22.194 Delete I/O Submission Queue (00h): Supported 00:09:22.194 Create I/O Submission Queue (01h): Supported 00:09:22.194 Get Log Page (02h): Supported 00:09:22.194 Delete I/O Completion Queue (04h): Supported 00:09:22.194 Create I/O Completion Queue (05h): Supported 00:09:22.194 Identify (06h): Supported 00:09:22.194 Abort (08h): Supported 00:09:22.194 Set Features (09h): Supported 00:09:22.194 Get Features (0Ah): Supported 00:09:22.194 Asynchronous Event Request (0Ch): Supported 00:09:22.194 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:22.194 Directive Send (19h): Supported 00:09:22.194 Directive Receive (1Ah): Supported 00:09:22.194 Virtualization Management (1Ch): Supported 00:09:22.194 Doorbell Buffer Config (7Ch): Supported 00:09:22.194 Format NVM (80h): Supported LBA-Change 00:09:22.194 I/O Commands 00:09:22.194 ------------ 00:09:22.194 Flush (00h): Supported LBA-Change 00:09:22.194 Write (01h): Supported LBA-Change 00:09:22.194 Read (02h): Supported 00:09:22.194 Compare (05h): Supported 00:09:22.194 Write Zeroes (08h): Supported LBA-Change 00:09:22.194 Dataset Management (09h): Supported LBA-Change 00:09:22.194 Unknown (0Ch): Supported 00:09:22.194 Unknown (12h): Supported 00:09:22.194 Copy (19h): Supported LBA-Change 00:09:22.194 Unknown (1Dh): Supported LBA-Change 00:09:22.194 00:09:22.194 Error Log 00:09:22.194 ========= 00:09:22.194 00:09:22.194 Arbitration 00:09:22.194 =========== 00:09:22.194 Arbitration Burst: no limit 00:09:22.194 00:09:22.194 Power Management 00:09:22.194 ================ 00:09:22.194 Number of Power States: 1 00:09:22.194 Current Power State: Power State #0 00:09:22.194 Power 
State #0: 00:09:22.194 Max Power: 25.00 W 00:09:22.194 Non-Operational State: Operational 00:09:22.194 Entry Latency: 16 microseconds 00:09:22.194 Exit Latency: 4 microseconds 00:09:22.194 Relative Read Throughput: 0 00:09:22.194 Relative Read Latency: 0 00:09:22.194 Relative Write Throughput: 0 00:09:22.194 Relative Write Latency: 0 00:09:22.194 Idle Power: Not Reported 00:09:22.194 Active Power: Not Reported 00:09:22.194 Non-Operational Permissive Mode: Not Supported 00:09:22.194 00:09:22.194 Health Information 00:09:22.194 ================== 00:09:22.194 Critical Warnings: 00:09:22.194 Available Spare Space: OK 00:09:22.194 Temperature: [2024-07-24 09:34:59.848730] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 79679 terminated unexpected 00:09:22.194 OK 00:09:22.194 Device Reliability: OK 00:09:22.194 Read Only: No 00:09:22.194 Volatile Memory Backup: OK 00:09:22.194 Current Temperature: 323 Kelvin (50 Celsius) 00:09:22.194 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:22.194 Available Spare: 0% 00:09:22.194 Available Spare Threshold: 0% 00:09:22.194 Life Percentage Used: 0% 00:09:22.194 Data Units Read: 1286 00:09:22.194 Data Units Written: 1064 00:09:22.194 Host Read Commands: 59446 00:09:22.194 Host Write Commands: 56436 00:09:22.194 Controller Busy Time: 0 minutes 00:09:22.194 Power Cycles: 0 00:09:22.194 Power On Hours: 0 hours 00:09:22.194 Unsafe Shutdowns: 0 00:09:22.194 Unrecoverable Media Errors: 0 00:09:22.194 Lifetime Error Log Entries: 0 00:09:22.194 Warning Temperature Time: 0 minutes 00:09:22.194 Critical Temperature Time: 0 minutes 00:09:22.194 00:09:22.194 Number of Queues 00:09:22.194 ================ 00:09:22.194 Number of I/O Submission Queues: 64 00:09:22.194 Number of I/O Completion Queues: 64 00:09:22.194 00:09:22.194 ZNS Specific Controller Data 00:09:22.194 ============================ 00:09:22.194 Zone Append Size Limit: 0 00:09:22.194 00:09:22.194 00:09:22.194 Active Namespaces 00:09:22.194 ================= 00:09:22.194 Namespace ID:1 00:09:22.194 Error Recovery Timeout: Unlimited 00:09:22.195 Command Set Identifier: NVM (00h) 00:09:22.195 Deallocate: Supported 00:09:22.195 Deallocated/Unwritten Error: Supported 00:09:22.195 Deallocated Read Value: All 0x00 00:09:22.195 Deallocate in Write Zeroes: Not Supported 00:09:22.195 Deallocated Guard Field: 0xFFFF 00:09:22.195 Flush: Supported 00:09:22.195 Reservation: Not Supported 00:09:22.195 Namespace Sharing Capabilities: Private 00:09:22.195 Size (in LBAs): 1310720 (5GiB) 00:09:22.195 Capacity (in LBAs): 1310720 (5GiB) 00:09:22.195 Utilization (in LBAs): 1310720 (5GiB) 00:09:22.195 Thin Provisioning: Not Supported 00:09:22.195 Per-NS Atomic Units: No 00:09:22.195 Maximum Single Source Range Length: 128 00:09:22.195 Maximum Copy Length: 128 00:09:22.195 Maximum Source Range Count: 128 00:09:22.195 NGUID/EUI64 Never Reused: No 00:09:22.195 Namespace Write Protected: No 00:09:22.195 Number of LBA Formats: 8 00:09:22.195 Current LBA Format: LBA Format #04 00:09:22.195 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.195 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.195 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.195 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.195 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.195 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.195 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.195 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.195 00:09:22.195 NVM 
Specific Namespace Data 00:09:22.195 =========================== 00:09:22.195 Logical Block Storage Tag Mask: 0 00:09:22.195 Protection Information Capabilities: 00:09:22.195 16b Guard Protection Information Storage Tag Support: No 00:09:22.195 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:22.195 Storage Tag Check Read Support: No 00:09:22.195 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.195 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.195 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.195 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.195 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.195 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.195 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.195 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.195 ===================================================== 00:09:22.195 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:22.195 ===================================================== 00:09:22.195 Controller Capabilities/Features 00:09:22.195 ================================ 00:09:22.195 Vendor ID: 1b36 00:09:22.195 Subsystem Vendor ID: 1af4 00:09:22.195 Serial Number: 12343 00:09:22.195 Model Number: QEMU NVMe Ctrl 00:09:22.195 Firmware Version: 8.0.0 00:09:22.195 Recommended Arb Burst: 6 00:09:22.195 IEEE OUI Identifier: 00 54 52 00:09:22.195 Multi-path I/O 00:09:22.195 May have multiple subsystem ports: No 00:09:22.195 May have multiple controllers: Yes 00:09:22.195 Associated with SR-IOV VF: No 00:09:22.195 Max Data Transfer Size: 524288 00:09:22.195 Max Number of Namespaces: 256 00:09:22.195 Max Number of I/O Queues: 64 00:09:22.195 NVMe Specification Version (VS): 1.4 00:09:22.195 NVMe Specification Version (Identify): 1.4 00:09:22.195 Maximum Queue Entries: 2048 00:09:22.195 Contiguous Queues Required: Yes 00:09:22.195 Arbitration Mechanisms Supported 00:09:22.195 Weighted Round Robin: Not Supported 00:09:22.195 Vendor Specific: Not Supported 00:09:22.195 Reset Timeout: 7500 ms 00:09:22.195 Doorbell Stride: 4 bytes 00:09:22.195 NVM Subsystem Reset: Not Supported 00:09:22.195 Command Sets Supported 00:09:22.195 NVM Command Set: Supported 00:09:22.195 Boot Partition: Not Supported 00:09:22.195 Memory Page Size Minimum: 4096 bytes 00:09:22.195 Memory Page Size Maximum: 65536 bytes 00:09:22.195 Persistent Memory Region: Not Supported 00:09:22.195 Optional Asynchronous Events Supported 00:09:22.195 Namespace Attribute Notices: Supported 00:09:22.195 Firmware Activation Notices: Not Supported 00:09:22.195 ANA Change Notices: Not Supported 00:09:22.195 PLE Aggregate Log Change Notices: Not Supported 00:09:22.195 LBA Status Info Alert Notices: Not Supported 00:09:22.195 EGE Aggregate Log Change Notices: Not Supported 00:09:22.195 Normal NVM Subsystem Shutdown event: Not Supported 00:09:22.195 Zone Descriptor Change Notices: Not Supported 00:09:22.195 Discovery Log Change Notices: Not Supported 00:09:22.195 Controller Attributes 00:09:22.195 128-bit Host Identifier: Not Supported 00:09:22.195 Non-Operational Permissive Mode: Not Supported 00:09:22.195 NVM Sets: Not Supported 00:09:22.195 Read Recovery 
Levels: Not Supported 00:09:22.195 Endurance Groups: Supported 00:09:22.195 Predictable Latency Mode: Not Supported 00:09:22.195 Traffic Based Keep ALive: Not Supported 00:09:22.195 Namespace Granularity: Not Supported 00:09:22.195 SQ Associations: Not Supported 00:09:22.195 UUID List: Not Supported 00:09:22.195 Multi-Domain Subsystem: Not Supported 00:09:22.195 Fixed Capacity Management: Not Supported 00:09:22.195 Variable Capacity Management: Not Supported 00:09:22.195 Delete Endurance Group: Not Supported 00:09:22.195 Delete NVM Set: Not Supported 00:09:22.195 Extended LBA Formats Supported: Supported 00:09:22.195 Flexible Data Placement Supported: Supported 00:09:22.195 00:09:22.195 Controller Memory Buffer Support 00:09:22.195 ================================ 00:09:22.195 Supported: No 00:09:22.195 00:09:22.195 Persistent Memory Region Support 00:09:22.195 ================================ 00:09:22.195 Supported: No 00:09:22.195 00:09:22.195 Admin Command Set Attributes 00:09:22.195 ============================ 00:09:22.195 Security Send/Receive: Not Supported 00:09:22.195 Format NVM: Supported 00:09:22.195 Firmware Activate/Download: Not Supported 00:09:22.195 Namespace Management: Supported 00:09:22.195 Device Self-Test: Not Supported 00:09:22.195 Directives: Supported 00:09:22.195 NVMe-MI: Not Supported 00:09:22.195 Virtualization Management: Not Supported 00:09:22.195 Doorbell Buffer Config: Supported 00:09:22.195 Get LBA Status Capability: Not Supported 00:09:22.195 Command & Feature Lockdown Capability: Not Supported 00:09:22.195 Abort Command Limit: 4 00:09:22.195 Async Event Request Limit: 4 00:09:22.195 Number of Firmware Slots: N/A 00:09:22.195 Firmware Slot 1 Read-Only: N/A 00:09:22.195 Firmware Activation Without Reset: N/A 00:09:22.195 Multiple Update Detection Support: N/A 00:09:22.195 Firmware Update Granularity: No Information Provided 00:09:22.195 Per-Namespace SMART Log: Yes 00:09:22.195 Asymmetric Namespace Access Log Page: Not Supported 00:09:22.195 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:22.195 Command Effects Log Page: Supported 00:09:22.195 Get Log Page Extended Data: Supported 00:09:22.195 Telemetry Log Pages: Not Supported 00:09:22.195 Persistent Event Log Pages: Not Supported 00:09:22.195 Supported Log Pages Log Page: May Support 00:09:22.195 Commands Supported & Effects Log Page: Not Supported 00:09:22.195 Feature Identifiers & Effects Log Page:May Support 00:09:22.195 NVMe-MI Commands & Effects Log Page: May Support 00:09:22.195 Data Area 4 for Telemetry Log: Not Supported 00:09:22.195 Error Log Page Entries Supported: 1 00:09:22.195 Keep Alive: Not Supported 00:09:22.195 00:09:22.195 NVM Command Set Attributes 00:09:22.195 ========================== 00:09:22.195 Submission Queue Entry Size 00:09:22.195 Max: 64 00:09:22.195 Min: 64 00:09:22.195 Completion Queue Entry Size 00:09:22.195 Max: 16 00:09:22.195 Min: 16 00:09:22.195 Number of Namespaces: 256 00:09:22.195 Compare Command: Supported 00:09:22.195 Write Uncorrectable Command: Not Supported 00:09:22.195 Dataset Management Command: Supported 00:09:22.195 Write Zeroes Command: Supported 00:09:22.195 Set Features Save Field: Supported 00:09:22.195 Reservations: Not Supported 00:09:22.195 Timestamp: Supported 00:09:22.195 Copy: Supported 00:09:22.195 Volatile Write Cache: Present 00:09:22.195 Atomic Write Unit (Normal): 1 00:09:22.195 Atomic Write Unit (PFail): 1 00:09:22.196 Atomic Compare & Write Unit: 1 00:09:22.196 Fused Compare & Write: Not Supported 00:09:22.196 Scatter-Gather List 
00:09:22.196 SGL Command Set: Supported 00:09:22.196 SGL Keyed: Not Supported 00:09:22.196 SGL Bit Bucket Descriptor: Not Supported 00:09:22.196 SGL Metadata Pointer: Not Supported 00:09:22.196 Oversized SGL: Not Supported 00:09:22.196 SGL Metadata Address: Not Supported 00:09:22.196 SGL Offset: Not Supported 00:09:22.196 Transport SGL Data Block: Not Supported 00:09:22.196 Replay Protected Memory Block: Not Supported 00:09:22.196 00:09:22.196 Firmware Slot Information 00:09:22.196 ========================= 00:09:22.196 Active slot: 1 00:09:22.196 Slot 1 Firmware Revision: 1.0 00:09:22.196 00:09:22.196 00:09:22.196 Commands Supported and Effects 00:09:22.196 ============================== 00:09:22.196 Admin Commands 00:09:22.196 -------------- 00:09:22.196 Delete I/O Submission Queue (00h): Supported 00:09:22.196 Create I/O Submission Queue (01h): Supported 00:09:22.196 Get Log Page (02h): Supported 00:09:22.196 Delete I/O Completion Queue (04h): Supported 00:09:22.196 Create I/O Completion Queue (05h): Supported 00:09:22.196 Identify (06h): Supported 00:09:22.196 Abort (08h): Supported 00:09:22.196 Set Features (09h): Supported 00:09:22.196 Get Features (0Ah): Supported 00:09:22.196 Asynchronous Event Request (0Ch): Supported 00:09:22.196 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:22.196 Directive Send (19h): Supported 00:09:22.196 Directive Receive (1Ah): Supported 00:09:22.196 Virtualization Management (1Ch): Supported 00:09:22.196 Doorbell Buffer Config (7Ch): Supported 00:09:22.196 Format NVM (80h): Supported LBA-Change 00:09:22.196 I/O Commands 00:09:22.196 ------------ 00:09:22.196 Flush (00h): Supported LBA-Change 00:09:22.196 Write (01h): Supported LBA-Change 00:09:22.196 Read (02h): Supported 00:09:22.196 Compare (05h): Supported 00:09:22.196 Write Zeroes (08h): Supported LBA-Change 00:09:22.196 Dataset Management (09h): Supported LBA-Change 00:09:22.196 Unknown (0Ch): Supported 00:09:22.196 Unknown (12h): Supported 00:09:22.196 Copy (19h): Supported LBA-Change 00:09:22.196 Unknown (1Dh): Supported LBA-Change 00:09:22.196 00:09:22.196 Error Log 00:09:22.196 ========= 00:09:22.196 00:09:22.196 Arbitration 00:09:22.196 =========== 00:09:22.196 Arbitration Burst: no limit 00:09:22.196 00:09:22.196 Power Management 00:09:22.196 ================ 00:09:22.196 Number of Power States: 1 00:09:22.196 Current Power State: Power State #0 00:09:22.196 Power State #0: 00:09:22.196 Max Power: 25.00 W 00:09:22.196 Non-Operational State: Operational 00:09:22.196 Entry Latency: 16 microseconds 00:09:22.196 Exit Latency: 4 microseconds 00:09:22.196 Relative Read Throughput: 0 00:09:22.196 Relative Read Latency: 0 00:09:22.196 Relative Write Throughput: 0 00:09:22.196 Relative Write Latency: 0 00:09:22.196 Idle Power: Not Reported 00:09:22.196 Active Power: Not Reported 00:09:22.196 Non-Operational Permissive Mode: Not Supported 00:09:22.196 00:09:22.196 Health Information 00:09:22.196 ================== 00:09:22.196 Critical Warnings: 00:09:22.196 Available Spare Space: OK 00:09:22.196 Temperature: OK 00:09:22.196 Device Reliability: OK 00:09:22.196 Read Only: No 00:09:22.196 Volatile Memory Backup: OK 00:09:22.196 Current Temperature: 323 Kelvin (50 Celsius) 00:09:22.196 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:22.196 Available Spare: 0% 00:09:22.196 Available Spare Threshold: 0% 00:09:22.196 Life Percentage Used: 0% 00:09:22.196 Data Units Read: 920 00:09:22.196 Data Units Written: 813 00:09:22.196 Host Read Commands: 40879 00:09:22.196 Host Write Commands: 39469 
00:09:22.196 Controller Busy Time: 0 minutes 00:09:22.196 Power Cycles: 0 00:09:22.196 Power On Hours: 0 hours 00:09:22.196 Unsafe Shutdowns: 0 00:09:22.196 Unrecoverable Media Errors: 0 00:09:22.196 Lifetime Error Log Entries: 0 00:09:22.196 Warning Temperature Time: 0 minutes 00:09:22.196 Critical Temperature Time: 0 minutes 00:09:22.196 00:09:22.196 Number of Queues 00:09:22.196 ================ 00:09:22.196 Number of I/O Submission Queues: 64 00:09:22.196 Number of I/O Completion Queues: 64 00:09:22.196 00:09:22.196 ZNS Specific Controller Data 00:09:22.196 ============================ 00:09:22.196 Zone Append Size Limit: 0 00:09:22.196 00:09:22.196 00:09:22.196 Active Namespaces 00:09:22.196 ================= 00:09:22.196 Namespace ID:1 00:09:22.196 Error Recovery Timeout: Unlimited 00:09:22.196 Command Set Identifier: NVM (00h) 00:09:22.196 Deallocate: Supported 00:09:22.196 Deallocated/Unwritten Error: Supported 00:09:22.196 Deallocated Read Value: All 0x00 00:09:22.196 Deallocate in Write Zeroes: Not Supported 00:09:22.196 Deallocated Guard Field: 0xFFFF 00:09:22.196 Flush: Supported 00:09:22.196 Reservation: Not Supported 00:09:22.196 Namespace Sharing Capabilities: Multiple Controllers 00:09:22.196 Size (in LBAs): 262144 (1GiB) 00:09:22.196 Capacity (in LBAs): 262144 (1GiB) 00:09:22.196 Utilization (in LBAs): 262144 (1GiB) 00:09:22.196 Thin Provisioning: Not Supported 00:09:22.196 Per-NS Atomic Units: No 00:09:22.196 Maximum Single Source Range Length: 128 00:09:22.196 Maximum Copy Length: 128 00:09:22.196 Maximum Source Range Count: 128 00:09:22.196 NGUID/EUI64 Never Reused: No 00:09:22.196 Namespace Write Protected: No 00:09:22.196 Endurance group ID: 1 00:09:22.196 Number of LBA Formats: 8 00:09:22.196 Current LBA Format: LBA Format #04 00:09:22.196 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.196 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.196 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.196 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.196 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.196 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.196 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.196 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.196 00:09:22.196 Get Feature FDP: 00:09:22.196 ================ 00:09:22.196 Enabled: Yes 00:09:22.196 FDP configuration index: 0 00:09:22.196 00:09:22.196 FDP configurations log page 00:09:22.196 =========================== 00:09:22.196 Number of FDP configurations: 1 00:09:22.196 Version: 0 00:09:22.196 Size: 112 00:09:22.196 FDP Configuration Descriptor: 0 00:09:22.196 Descriptor Size: 96 00:09:22.196 Reclaim Group Identifier format: 2 00:09:22.196 FDP Volatile Write Cache: Not Present 00:09:22.196 FDP Configuration: Valid 00:09:22.196 Vendor Specific Size: 0 00:09:22.196 Number of Reclaim Groups: 2 00:09:22.196 Number of Reclaim Unit Handles: 8 00:09:22.196 Max Placement Identifiers: 128 00:09:22.196 Number of Namespaces Supported: 256 00:09:22.196 Reclaim Unit Nominal Size: 6000000 bytes 00:09:22.196 Estimated Reclaim Unit Time Limit: Not Reported 00:09:22.196 RUH Desc #000: RUH Type: Initially Isolated 00:09:22.196 RUH Desc #001: RUH Type: Initially Isolated 00:09:22.196 RUH Desc #002: RUH Type: Initially Isolated 00:09:22.196 RUH Desc #003: RUH Type: Initially Isolated 00:09:22.196 RUH Desc #004: RUH Type: Initially Isolated 00:09:22.196 RUH Desc #005: RUH Type: Initially Isolated 00:09:22.196 RUH Desc #006: RUH Type: Initially Isolated 
00:09:22.196 RUH Desc #007: RUH Type: Initially Isolated 00:09:22.196 00:09:22.196 FDP reclaim unit handle usage log page 00:09:22.196 ====================================== 00:09:22.196 Number of Reclaim Unit Handles: 8 00:09:22.196 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:22.196 RUH Usage Desc #001: RUH Attributes: Unused 00:09:22.196 RUH Usage Desc #002: RUH Attributes: Unused 00:09:22.196 RUH Usage Desc #003: RUH Attributes: Unused 00:09:22.196 RUH Usage Desc #004: RUH Attributes: Unused 00:09:22.196 RUH Usage Desc #005: RUH Attributes: Unused 00:09:22.196 RUH Usage Desc #006: RUH Attributes: Unused 00:09:22.196 RUH Usage Desc #007: RUH Attributes: Unused 00:09:22.196 00:09:22.196 FDP statistics log page 00:09:22.196 ======================= 00:09:22.196 Host bytes with metadata written: 490381312 [2024-07-24 09:34:59.850416] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 79679 terminated unexpected 00:09:22.196 Media bytes with metadata written: 490434560 00:09:22.196 Media bytes erased: 0 00:09:22.196 00:09:22.196 FDP events log page 00:09:22.196 =================== 00:09:22.196 Number of FDP events: 0 00:09:22.196 00:09:22.196 NVM Specific Namespace Data 00:09:22.196 =========================== 00:09:22.197 Logical Block Storage Tag Mask: 0 00:09:22.197 Protection Information Capabilities: 00:09:22.197 16b Guard Protection Information Storage Tag Support: No 00:09:22.197 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:22.197 Storage Tag Check Read Support: No 00:09:22.197 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.197 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.197 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.197 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.197 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.197 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.197 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.197 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.197 ===================================================== 00:09:22.197 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:22.197 ===================================================== 00:09:22.197 Controller Capabilities/Features 00:09:22.197 ================================ 00:09:22.197 Vendor ID: 1b36 00:09:22.197 Subsystem Vendor ID: 1af4 00:09:22.197 Serial Number: 12342 00:09:22.197 Model Number: QEMU NVMe Ctrl 00:09:22.197 Firmware Version: 8.0.0 00:09:22.197 Recommended Arb Burst: 6 00:09:22.197 IEEE OUI Identifier: 00 54 52 00:09:22.197 Multi-path I/O 00:09:22.197 May have multiple subsystem ports: No 00:09:22.197 May have multiple controllers: No 00:09:22.197 Associated with SR-IOV VF: No 00:09:22.197 Max Data Transfer Size: 524288 00:09:22.197 Max Number of Namespaces: 256 00:09:22.197 Max Number of I/O Queues: 64 00:09:22.197 NVMe Specification Version (VS): 1.4 00:09:22.197 NVMe Specification Version (Identify): 1.4 00:09:22.197 Maximum Queue Entries: 2048 00:09:22.197 Contiguous Queues Required: Yes 00:09:22.197 Arbitration Mechanisms Supported 00:09:22.197 Weighted Round Robin: Not 
Supported 00:09:22.197 Vendor Specific: Not Supported 00:09:22.197 Reset Timeout: 7500 ms 00:09:22.197 Doorbell Stride: 4 bytes 00:09:22.197 NVM Subsystem Reset: Not Supported 00:09:22.197 Command Sets Supported 00:09:22.197 NVM Command Set: Supported 00:09:22.197 Boot Partition: Not Supported 00:09:22.197 Memory Page Size Minimum: 4096 bytes 00:09:22.197 Memory Page Size Maximum: 65536 bytes 00:09:22.197 Persistent Memory Region: Not Supported 00:09:22.197 Optional Asynchronous Events Supported 00:09:22.197 Namespace Attribute Notices: Supported 00:09:22.197 Firmware Activation Notices: Not Supported 00:09:22.197 ANA Change Notices: Not Supported 00:09:22.197 PLE Aggregate Log Change Notices: Not Supported 00:09:22.197 LBA Status Info Alert Notices: Not Supported 00:09:22.197 EGE Aggregate Log Change Notices: Not Supported 00:09:22.197 Normal NVM Subsystem Shutdown event: Not Supported 00:09:22.197 Zone Descriptor Change Notices: Not Supported 00:09:22.197 Discovery Log Change Notices: Not Supported 00:09:22.197 Controller Attributes 00:09:22.197 128-bit Host Identifier: Not Supported 00:09:22.197 Non-Operational Permissive Mode: Not Supported 00:09:22.197 NVM Sets: Not Supported 00:09:22.197 Read Recovery Levels: Not Supported 00:09:22.197 Endurance Groups: Not Supported 00:09:22.197 Predictable Latency Mode: Not Supported 00:09:22.197 Traffic Based Keep ALive: Not Supported 00:09:22.197 Namespace Granularity: Not Supported 00:09:22.197 SQ Associations: Not Supported 00:09:22.197 UUID List: Not Supported 00:09:22.197 Multi-Domain Subsystem: Not Supported 00:09:22.197 Fixed Capacity Management: Not Supported 00:09:22.197 Variable Capacity Management: Not Supported 00:09:22.197 Delete Endurance Group: Not Supported 00:09:22.197 Delete NVM Set: Not Supported 00:09:22.197 Extended LBA Formats Supported: Supported 00:09:22.197 Flexible Data Placement Supported: Not Supported 00:09:22.197 00:09:22.197 Controller Memory Buffer Support 00:09:22.197 ================================ 00:09:22.197 Supported: No 00:09:22.197 00:09:22.197 Persistent Memory Region Support 00:09:22.197 ================================ 00:09:22.197 Supported: No 00:09:22.197 00:09:22.197 Admin Command Set Attributes 00:09:22.197 ============================ 00:09:22.197 Security Send/Receive: Not Supported 00:09:22.197 Format NVM: Supported 00:09:22.197 Firmware Activate/Download: Not Supported 00:09:22.197 Namespace Management: Supported 00:09:22.197 Device Self-Test: Not Supported 00:09:22.197 Directives: Supported 00:09:22.197 NVMe-MI: Not Supported 00:09:22.197 Virtualization Management: Not Supported 00:09:22.197 Doorbell Buffer Config: Supported 00:09:22.197 Get LBA Status Capability: Not Supported 00:09:22.197 Command & Feature Lockdown Capability: Not Supported 00:09:22.197 Abort Command Limit: 4 00:09:22.197 Async Event Request Limit: 4 00:09:22.197 Number of Firmware Slots: N/A 00:09:22.197 Firmware Slot 1 Read-Only: N/A 00:09:22.197 Firmware Activation Without Reset: N/A 00:09:22.197 Multiple Update Detection Support: N/A 00:09:22.197 Firmware Update Granularity: No Information Provided 00:09:22.197 Per-Namespace SMART Log: Yes 00:09:22.197 Asymmetric Namespace Access Log Page: Not Supported 00:09:22.197 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:22.197 Command Effects Log Page: Supported 00:09:22.197 Get Log Page Extended Data: Supported 00:09:22.197 Telemetry Log Pages: Not Supported 00:09:22.197 Persistent Event Log Pages: Not Supported 00:09:22.197 Supported Log Pages Log Page: May Support 
00:09:22.197 Commands Supported & Effects Log Page: Not Supported 00:09:22.197 Feature Identifiers & Effects Log Page:May Support 00:09:22.197 NVMe-MI Commands & Effects Log Page: May Support 00:09:22.197 Data Area 4 for Telemetry Log: Not Supported 00:09:22.197 Error Log Page Entries Supported: 1 00:09:22.197 Keep Alive: Not Supported 00:09:22.197 00:09:22.197 NVM Command Set Attributes 00:09:22.197 ========================== 00:09:22.197 Submission Queue Entry Size 00:09:22.197 Max: 64 00:09:22.197 Min: 64 00:09:22.197 Completion Queue Entry Size 00:09:22.197 Max: 16 00:09:22.197 Min: 16 00:09:22.197 Number of Namespaces: 256 00:09:22.197 Compare Command: Supported 00:09:22.197 Write Uncorrectable Command: Not Supported 00:09:22.197 Dataset Management Command: Supported 00:09:22.197 Write Zeroes Command: Supported 00:09:22.197 Set Features Save Field: Supported 00:09:22.197 Reservations: Not Supported 00:09:22.197 Timestamp: Supported 00:09:22.197 Copy: Supported 00:09:22.197 Volatile Write Cache: Present 00:09:22.197 Atomic Write Unit (Normal): 1 00:09:22.197 Atomic Write Unit (PFail): 1 00:09:22.197 Atomic Compare & Write Unit: 1 00:09:22.197 Fused Compare & Write: Not Supported 00:09:22.197 Scatter-Gather List 00:09:22.198 SGL Command Set: Supported 00:09:22.198 SGL Keyed: Not Supported 00:09:22.198 SGL Bit Bucket Descriptor: Not Supported 00:09:22.198 SGL Metadata Pointer: Not Supported 00:09:22.198 Oversized SGL: Not Supported 00:09:22.198 SGL Metadata Address: Not Supported 00:09:22.198 SGL Offset: Not Supported 00:09:22.198 Transport SGL Data Block: Not Supported 00:09:22.198 Replay Protected Memory Block: Not Supported 00:09:22.198 00:09:22.198 Firmware Slot Information 00:09:22.198 ========================= 00:09:22.198 Active slot: 1 00:09:22.198 Slot 1 Firmware Revision: 1.0 00:09:22.198 00:09:22.198 00:09:22.198 Commands Supported and Effects 00:09:22.198 ============================== 00:09:22.198 Admin Commands 00:09:22.198 -------------- 00:09:22.198 Delete I/O Submission Queue (00h): Supported 00:09:22.198 Create I/O Submission Queue (01h): Supported 00:09:22.198 Get Log Page (02h): Supported 00:09:22.198 Delete I/O Completion Queue (04h): Supported 00:09:22.198 Create I/O Completion Queue (05h): Supported 00:09:22.198 Identify (06h): Supported 00:09:22.198 Abort (08h): Supported 00:09:22.198 Set Features (09h): Supported 00:09:22.198 Get Features (0Ah): Supported 00:09:22.198 Asynchronous Event Request (0Ch): Supported 00:09:22.198 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:22.198 Directive Send (19h): Supported 00:09:22.198 Directive Receive (1Ah): Supported 00:09:22.198 Virtualization Management (1Ch): Supported 00:09:22.198 Doorbell Buffer Config (7Ch): Supported 00:09:22.198 Format NVM (80h): Supported LBA-Change 00:09:22.198 I/O Commands 00:09:22.198 ------------ 00:09:22.198 Flush (00h): Supported LBA-Change 00:09:22.198 Write (01h): Supported LBA-Change 00:09:22.198 Read (02h): Supported 00:09:22.198 Compare (05h): Supported 00:09:22.198 Write Zeroes (08h): Supported LBA-Change 00:09:22.198 Dataset Management (09h): Supported LBA-Change 00:09:22.198 Unknown (0Ch): Supported 00:09:22.198 Unknown (12h): Supported 00:09:22.198 Copy (19h): Supported LBA-Change 00:09:22.198 Unknown (1Dh): Supported LBA-Change 00:09:22.198 00:09:22.198 Error Log 00:09:22.198 ========= 00:09:22.198 00:09:22.198 Arbitration 00:09:22.198 =========== 00:09:22.198 Arbitration Burst: no limit 00:09:22.198 00:09:22.198 Power Management 00:09:22.198 ================ 
00:09:22.198 Number of Power States: 1 00:09:22.198 Current Power State: Power State #0 00:09:22.198 Power State #0: 00:09:22.198 Max Power: 25.00 W 00:09:22.198 Non-Operational State: Operational 00:09:22.198 Entry Latency: 16 microseconds 00:09:22.198 Exit Latency: 4 microseconds 00:09:22.198 Relative Read Throughput: 0 00:09:22.198 Relative Read Latency: 0 00:09:22.198 Relative Write Throughput: 0 00:09:22.198 Relative Write Latency: 0 00:09:22.198 Idle Power: Not Reported 00:09:22.198 Active Power: Not Reported 00:09:22.198 Non-Operational Permissive Mode: Not Supported 00:09:22.198 00:09:22.198 Health Information 00:09:22.198 ================== 00:09:22.198 Critical Warnings: 00:09:22.198 Available Spare Space: OK 00:09:22.198 Temperature: OK 00:09:22.198 Device Reliability: OK 00:09:22.198 Read Only: No 00:09:22.198 Volatile Memory Backup: OK 00:09:22.198 Current Temperature: 323 Kelvin (50 Celsius) 00:09:22.198 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:22.198 Available Spare: 0% 00:09:22.198 Available Spare Threshold: 0% 00:09:22.198 Life Percentage Used: 0% 00:09:22.198 Data Units Read: 2571 00:09:22.198 Data Units Written: 2251 00:09:22.198 Host Read Commands: 121107 00:09:22.198 Host Write Commands: 116877 00:09:22.198 Controller Busy Time: 0 minutes 00:09:22.198 Power Cycles: 0 00:09:22.198 Power On Hours: 0 hours 00:09:22.198 Unsafe Shutdowns: 0 00:09:22.198 Unrecoverable Media Errors: 0 00:09:22.198 Lifetime Error Log Entries: 0 00:09:22.198 Warning Temperature Time: 0 minutes 00:09:22.198 Critical Temperature Time: 0 minutes 00:09:22.198 00:09:22.198 Number of Queues 00:09:22.198 ================ 00:09:22.198 Number of I/O Submission Queues: 64 00:09:22.198 Number of I/O Completion Queues: 64 00:09:22.198 00:09:22.198 ZNS Specific Controller Data 00:09:22.198 ============================ 00:09:22.198 Zone Append Size Limit: 0 00:09:22.198 00:09:22.198 00:09:22.198 Active Namespaces 00:09:22.198 ================= 00:09:22.198 Namespace ID:1 00:09:22.198 Error Recovery Timeout: Unlimited 00:09:22.198 Command Set Identifier: NVM (00h) 00:09:22.198 Deallocate: Supported 00:09:22.198 Deallocated/Unwritten Error: Supported 00:09:22.198 Deallocated Read Value: All 0x00 00:09:22.198 Deallocate in Write Zeroes: Not Supported 00:09:22.198 Deallocated Guard Field: 0xFFFF 00:09:22.198 Flush: Supported 00:09:22.198 Reservation: Not Supported 00:09:22.198 Namespace Sharing Capabilities: Private 00:09:22.198 Size (in LBAs): 1048576 (4GiB) 00:09:22.198 Capacity (in LBAs): 1048576 (4GiB) 00:09:22.198 Utilization (in LBAs): 1048576 (4GiB) 00:09:22.198 Thin Provisioning: Not Supported 00:09:22.198 Per-NS Atomic Units: No 00:09:22.198 Maximum Single Source Range Length: 128 00:09:22.198 Maximum Copy Length: 128 00:09:22.198 Maximum Source Range Count: 128 00:09:22.198 NGUID/EUI64 Never Reused: No 00:09:22.198 Namespace Write Protected: No 00:09:22.198 Number of LBA Formats: 8 00:09:22.198 Current LBA Format: LBA Format #04 00:09:22.198 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.198 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.198 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.198 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.198 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.198 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.198 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.198 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.198 00:09:22.198 NVM Specific Namespace Data 00:09:22.198 
=========================== 00:09:22.198 Logical Block Storage Tag Mask: 0 00:09:22.198 Protection Information Capabilities: 00:09:22.198 16b Guard Protection Information Storage Tag Support: No 00:09:22.198 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:22.198 Storage Tag Check Read Support: No 00:09:22.198 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.198 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.198 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.198 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.198 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.198 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.198 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.198 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.198 Namespace ID:2 00:09:22.198 Error Recovery Timeout: Unlimited 00:09:22.198 Command Set Identifier: NVM (00h) 00:09:22.198 Deallocate: Supported 00:09:22.198 Deallocated/Unwritten Error: Supported 00:09:22.198 Deallocated Read Value: All 0x00 00:09:22.198 Deallocate in Write Zeroes: Not Supported 00:09:22.198 Deallocated Guard Field: 0xFFFF 00:09:22.198 Flush: Supported 00:09:22.198 Reservation: Not Supported 00:09:22.198 Namespace Sharing Capabilities: Private 00:09:22.198 Size (in LBAs): 1048576 (4GiB) 00:09:22.198 Capacity (in LBAs): 1048576 (4GiB) 00:09:22.198 Utilization (in LBAs): 1048576 (4GiB) 00:09:22.198 Thin Provisioning: Not Supported 00:09:22.198 Per-NS Atomic Units: No 00:09:22.198 Maximum Single Source Range Length: 128 00:09:22.198 Maximum Copy Length: 128 00:09:22.198 Maximum Source Range Count: 128 00:09:22.198 NGUID/EUI64 Never Reused: No 00:09:22.198 Namespace Write Protected: No 00:09:22.198 Number of LBA Formats: 8 00:09:22.198 Current LBA Format: LBA Format #04 00:09:22.198 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.198 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.198 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.198 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.198 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.198 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.198 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.198 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.198 00:09:22.198 NVM Specific Namespace Data 00:09:22.198 =========================== 00:09:22.198 Logical Block Storage Tag Mask: 0 00:09:22.198 Protection Information Capabilities: 00:09:22.199 16b Guard Protection Information Storage Tag Support: No 00:09:22.199 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:22.199 Storage Tag Check Read Support: No 00:09:22.199 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.199 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.199 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.199 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.199 Extended LBA Format #04: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:09:22.199 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.199 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.199 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.199 Namespace ID:3 00:09:22.199 Error Recovery Timeout: Unlimited 00:09:22.199 Command Set Identifier: NVM (00h) 00:09:22.199 Deallocate: Supported 00:09:22.199 Deallocated/Unwritten Error: Supported 00:09:22.199 Deallocated Read Value: All 0x00 00:09:22.199 Deallocate in Write Zeroes: Not Supported 00:09:22.199 Deallocated Guard Field: 0xFFFF 00:09:22.199 Flush: Supported 00:09:22.199 Reservation: Not Supported 00:09:22.199 Namespace Sharing Capabilities: Private 00:09:22.199 Size (in LBAs): 1048576 (4GiB) 00:09:22.199 Capacity (in LBAs): 1048576 (4GiB) 00:09:22.199 Utilization (in LBAs): 1048576 (4GiB) 00:09:22.199 Thin Provisioning: Not Supported 00:09:22.199 Per-NS Atomic Units: No 00:09:22.199 Maximum Single Source Range Length: 128 00:09:22.199 Maximum Copy Length: 128 00:09:22.199 Maximum Source Range Count: 128 00:09:22.199 NGUID/EUI64 Never Reused: No 00:09:22.199 Namespace Write Protected: No 00:09:22.199 Number of LBA Formats: 8 00:09:22.199 Current LBA Format: LBA Format #04 00:09:22.199 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.199 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.199 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.199 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.199 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.199 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.199 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.199 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.199 00:09:22.199 NVM Specific Namespace Data 00:09:22.199 =========================== 00:09:22.199 Logical Block Storage Tag Mask: 0 00:09:22.199 Protection Information Capabilities: 00:09:22.199 16b Guard Protection Information Storage Tag Support: No 00:09:22.199 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:22.199 Storage Tag Check Read Support: No 00:09:22.199 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.199 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.199 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.199 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.199 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.199 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.199 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.199 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.199 09:34:59 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:22.199 09:34:59 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:22.458 ===================================================== 00:09:22.458 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:22.458 ===================================================== 00:09:22.458 
Controller Capabilities/Features 00:09:22.458 ================================ 00:09:22.458 Vendor ID: 1b36 00:09:22.458 Subsystem Vendor ID: 1af4 00:09:22.458 Serial Number: 12340 00:09:22.458 Model Number: QEMU NVMe Ctrl 00:09:22.458 Firmware Version: 8.0.0 00:09:22.458 Recommended Arb Burst: 6 00:09:22.458 IEEE OUI Identifier: 00 54 52 00:09:22.458 Multi-path I/O 00:09:22.458 May have multiple subsystem ports: No 00:09:22.458 May have multiple controllers: No 00:09:22.458 Associated with SR-IOV VF: No 00:09:22.458 Max Data Transfer Size: 524288 00:09:22.458 Max Number of Namespaces: 256 00:09:22.458 Max Number of I/O Queues: 64 00:09:22.458 NVMe Specification Version (VS): 1.4 00:09:22.458 NVMe Specification Version (Identify): 1.4 00:09:22.458 Maximum Queue Entries: 2048 00:09:22.458 Contiguous Queues Required: Yes 00:09:22.458 Arbitration Mechanisms Supported 00:09:22.458 Weighted Round Robin: Not Supported 00:09:22.458 Vendor Specific: Not Supported 00:09:22.458 Reset Timeout: 7500 ms 00:09:22.458 Doorbell Stride: 4 bytes 00:09:22.458 NVM Subsystem Reset: Not Supported 00:09:22.458 Command Sets Supported 00:09:22.458 NVM Command Set: Supported 00:09:22.458 Boot Partition: Not Supported 00:09:22.458 Memory Page Size Minimum: 4096 bytes 00:09:22.458 Memory Page Size Maximum: 65536 bytes 00:09:22.458 Persistent Memory Region: Not Supported 00:09:22.458 Optional Asynchronous Events Supported 00:09:22.458 Namespace Attribute Notices: Supported 00:09:22.458 Firmware Activation Notices: Not Supported 00:09:22.458 ANA Change Notices: Not Supported 00:09:22.458 PLE Aggregate Log Change Notices: Not Supported 00:09:22.458 LBA Status Info Alert Notices: Not Supported 00:09:22.458 EGE Aggregate Log Change Notices: Not Supported 00:09:22.458 Normal NVM Subsystem Shutdown event: Not Supported 00:09:22.458 Zone Descriptor Change Notices: Not Supported 00:09:22.458 Discovery Log Change Notices: Not Supported 00:09:22.458 Controller Attributes 00:09:22.458 128-bit Host Identifier: Not Supported 00:09:22.458 Non-Operational Permissive Mode: Not Supported 00:09:22.458 NVM Sets: Not Supported 00:09:22.458 Read Recovery Levels: Not Supported 00:09:22.458 Endurance Groups: Not Supported 00:09:22.458 Predictable Latency Mode: Not Supported 00:09:22.458 Traffic Based Keep ALive: Not Supported 00:09:22.458 Namespace Granularity: Not Supported 00:09:22.459 SQ Associations: Not Supported 00:09:22.459 UUID List: Not Supported 00:09:22.459 Multi-Domain Subsystem: Not Supported 00:09:22.459 Fixed Capacity Management: Not Supported 00:09:22.459 Variable Capacity Management: Not Supported 00:09:22.459 Delete Endurance Group: Not Supported 00:09:22.459 Delete NVM Set: Not Supported 00:09:22.459 Extended LBA Formats Supported: Supported 00:09:22.459 Flexible Data Placement Supported: Not Supported 00:09:22.459 00:09:22.459 Controller Memory Buffer Support 00:09:22.459 ================================ 00:09:22.459 Supported: No 00:09:22.459 00:09:22.459 Persistent Memory Region Support 00:09:22.459 ================================ 00:09:22.459 Supported: No 00:09:22.459 00:09:22.459 Admin Command Set Attributes 00:09:22.459 ============================ 00:09:22.459 Security Send/Receive: Not Supported 00:09:22.459 Format NVM: Supported 00:09:22.459 Firmware Activate/Download: Not Supported 00:09:22.459 Namespace Management: Supported 00:09:22.459 Device Self-Test: Not Supported 00:09:22.459 Directives: Supported 00:09:22.459 NVMe-MI: Not Supported 00:09:22.459 Virtualization Management: Not Supported 00:09:22.459 
Doorbell Buffer Config: Supported 00:09:22.459 Get LBA Status Capability: Not Supported 00:09:22.459 Command & Feature Lockdown Capability: Not Supported 00:09:22.459 Abort Command Limit: 4 00:09:22.459 Async Event Request Limit: 4 00:09:22.459 Number of Firmware Slots: N/A 00:09:22.459 Firmware Slot 1 Read-Only: N/A 00:09:22.459 Firmware Activation Without Reset: N/A 00:09:22.459 Multiple Update Detection Support: N/A 00:09:22.459 Firmware Update Granularity: No Information Provided 00:09:22.459 Per-Namespace SMART Log: Yes 00:09:22.459 Asymmetric Namespace Access Log Page: Not Supported 00:09:22.459 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:22.459 Command Effects Log Page: Supported 00:09:22.459 Get Log Page Extended Data: Supported 00:09:22.459 Telemetry Log Pages: Not Supported 00:09:22.459 Persistent Event Log Pages: Not Supported 00:09:22.459 Supported Log Pages Log Page: May Support 00:09:22.459 Commands Supported & Effects Log Page: Not Supported 00:09:22.459 Feature Identifiers & Effects Log Page:May Support 00:09:22.459 NVMe-MI Commands & Effects Log Page: May Support 00:09:22.459 Data Area 4 for Telemetry Log: Not Supported 00:09:22.459 Error Log Page Entries Supported: 1 00:09:22.459 Keep Alive: Not Supported 00:09:22.459 00:09:22.459 NVM Command Set Attributes 00:09:22.459 ========================== 00:09:22.459 Submission Queue Entry Size 00:09:22.459 Max: 64 00:09:22.459 Min: 64 00:09:22.459 Completion Queue Entry Size 00:09:22.459 Max: 16 00:09:22.459 Min: 16 00:09:22.459 Number of Namespaces: 256 00:09:22.459 Compare Command: Supported 00:09:22.459 Write Uncorrectable Command: Not Supported 00:09:22.459 Dataset Management Command: Supported 00:09:22.459 Write Zeroes Command: Supported 00:09:22.459 Set Features Save Field: Supported 00:09:22.459 Reservations: Not Supported 00:09:22.459 Timestamp: Supported 00:09:22.459 Copy: Supported 00:09:22.459 Volatile Write Cache: Present 00:09:22.459 Atomic Write Unit (Normal): 1 00:09:22.459 Atomic Write Unit (PFail): 1 00:09:22.459 Atomic Compare & Write Unit: 1 00:09:22.459 Fused Compare & Write: Not Supported 00:09:22.459 Scatter-Gather List 00:09:22.459 SGL Command Set: Supported 00:09:22.459 SGL Keyed: Not Supported 00:09:22.459 SGL Bit Bucket Descriptor: Not Supported 00:09:22.459 SGL Metadata Pointer: Not Supported 00:09:22.459 Oversized SGL: Not Supported 00:09:22.459 SGL Metadata Address: Not Supported 00:09:22.459 SGL Offset: Not Supported 00:09:22.459 Transport SGL Data Block: Not Supported 00:09:22.459 Replay Protected Memory Block: Not Supported 00:09:22.459 00:09:22.459 Firmware Slot Information 00:09:22.459 ========================= 00:09:22.459 Active slot: 1 00:09:22.459 Slot 1 Firmware Revision: 1.0 00:09:22.459 00:09:22.459 00:09:22.459 Commands Supported and Effects 00:09:22.459 ============================== 00:09:22.459 Admin Commands 00:09:22.459 -------------- 00:09:22.459 Delete I/O Submission Queue (00h): Supported 00:09:22.459 Create I/O Submission Queue (01h): Supported 00:09:22.459 Get Log Page (02h): Supported 00:09:22.459 Delete I/O Completion Queue (04h): Supported 00:09:22.459 Create I/O Completion Queue (05h): Supported 00:09:22.459 Identify (06h): Supported 00:09:22.459 Abort (08h): Supported 00:09:22.459 Set Features (09h): Supported 00:09:22.459 Get Features (0Ah): Supported 00:09:22.459 Asynchronous Event Request (0Ch): Supported 00:09:22.459 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:22.459 Directive Send (19h): Supported 00:09:22.459 Directive Receive (1Ah): Supported 
00:09:22.459 Virtualization Management (1Ch): Supported 00:09:22.459 Doorbell Buffer Config (7Ch): Supported 00:09:22.459 Format NVM (80h): Supported LBA-Change 00:09:22.459 I/O Commands 00:09:22.459 ------------ 00:09:22.459 Flush (00h): Supported LBA-Change 00:09:22.459 Write (01h): Supported LBA-Change 00:09:22.459 Read (02h): Supported 00:09:22.459 Compare (05h): Supported 00:09:22.459 Write Zeroes (08h): Supported LBA-Change 00:09:22.459 Dataset Management (09h): Supported LBA-Change 00:09:22.459 Unknown (0Ch): Supported 00:09:22.459 Unknown (12h): Supported 00:09:22.459 Copy (19h): Supported LBA-Change 00:09:22.459 Unknown (1Dh): Supported LBA-Change 00:09:22.459 00:09:22.459 Error Log 00:09:22.459 ========= 00:09:22.459 00:09:22.459 Arbitration 00:09:22.459 =========== 00:09:22.459 Arbitration Burst: no limit 00:09:22.459 00:09:22.459 Power Management 00:09:22.459 ================ 00:09:22.459 Number of Power States: 1 00:09:22.459 Current Power State: Power State #0 00:09:22.459 Power State #0: 00:09:22.459 Max Power: 25.00 W 00:09:22.459 Non-Operational State: Operational 00:09:22.459 Entry Latency: 16 microseconds 00:09:22.459 Exit Latency: 4 microseconds 00:09:22.459 Relative Read Throughput: 0 00:09:22.459 Relative Read Latency: 0 00:09:22.459 Relative Write Throughput: 0 00:09:22.459 Relative Write Latency: 0 00:09:22.459 Idle Power: Not Reported 00:09:22.459 Active Power: Not Reported 00:09:22.459 Non-Operational Permissive Mode: Not Supported 00:09:22.459 00:09:22.459 Health Information 00:09:22.459 ================== 00:09:22.459 Critical Warnings: 00:09:22.459 Available Spare Space: OK 00:09:22.459 Temperature: OK 00:09:22.459 Device Reliability: OK 00:09:22.459 Read Only: No 00:09:22.459 Volatile Memory Backup: OK 00:09:22.459 Current Temperature: 323 Kelvin (50 Celsius) 00:09:22.459 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:22.459 Available Spare: 0% 00:09:22.459 Available Spare Threshold: 0% 00:09:22.459 Life Percentage Used: 0% 00:09:22.459 Data Units Read: 789 00:09:22.459 Data Units Written: 681 00:09:22.459 Host Read Commands: 39407 00:09:22.459 Host Write Commands: 38445 00:09:22.459 Controller Busy Time: 0 minutes 00:09:22.459 Power Cycles: 0 00:09:22.459 Power On Hours: 0 hours 00:09:22.459 Unsafe Shutdowns: 0 00:09:22.459 Unrecoverable Media Errors: 0 00:09:22.459 Lifetime Error Log Entries: 0 00:09:22.459 Warning Temperature Time: 0 minutes 00:09:22.459 Critical Temperature Time: 0 minutes 00:09:22.459 00:09:22.459 Number of Queues 00:09:22.459 ================ 00:09:22.459 Number of I/O Submission Queues: 64 00:09:22.459 Number of I/O Completion Queues: 64 00:09:22.459 00:09:22.459 ZNS Specific Controller Data 00:09:22.459 ============================ 00:09:22.459 Zone Append Size Limit: 0 00:09:22.459 00:09:22.459 00:09:22.459 Active Namespaces 00:09:22.459 ================= 00:09:22.459 Namespace ID:1 00:09:22.459 Error Recovery Timeout: Unlimited 00:09:22.459 Command Set Identifier: NVM (00h) 00:09:22.459 Deallocate: Supported 00:09:22.459 Deallocated/Unwritten Error: Supported 00:09:22.459 Deallocated Read Value: All 0x00 00:09:22.459 Deallocate in Write Zeroes: Not Supported 00:09:22.459 Deallocated Guard Field: 0xFFFF 00:09:22.459 Flush: Supported 00:09:22.459 Reservation: Not Supported 00:09:22.459 Metadata Transferred as: Separate Metadata Buffer 00:09:22.459 Namespace Sharing Capabilities: Private 00:09:22.459 Size (in LBAs): 1548666 (5GiB) 00:09:22.459 Capacity (in LBAs): 1548666 (5GiB) 00:09:22.459 Utilization (in LBAs): 1548666 (5GiB) 
00:09:22.459 Thin Provisioning: Not Supported 00:09:22.459 Per-NS Atomic Units: No 00:09:22.459 Maximum Single Source Range Length: 128 00:09:22.459 Maximum Copy Length: 128 00:09:22.460 Maximum Source Range Count: 128 00:09:22.460 NGUID/EUI64 Never Reused: No 00:09:22.460 Namespace Write Protected: No 00:09:22.460 Number of LBA Formats: 8 00:09:22.460 Current LBA Format: LBA Format #07 00:09:22.460 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.460 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.460 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.460 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.460 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.460 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.460 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.460 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.460 00:09:22.460 NVM Specific Namespace Data 00:09:22.460 =========================== 00:09:22.460 Logical Block Storage Tag Mask: 0 00:09:22.460 Protection Information Capabilities: 00:09:22.460 16b Guard Protection Information Storage Tag Support: No 00:09:22.460 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:22.460 Storage Tag Check Read Support: No 00:09:22.460 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.460 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.460 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.460 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.460 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.460 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.460 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.460 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.460 09:35:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:22.460 09:35:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:22.719 ===================================================== 00:09:22.719 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:22.719 ===================================================== 00:09:22.719 Controller Capabilities/Features 00:09:22.719 ================================ 00:09:22.719 Vendor ID: 1b36 00:09:22.719 Subsystem Vendor ID: 1af4 00:09:22.719 Serial Number: 12341 00:09:22.719 Model Number: QEMU NVMe Ctrl 00:09:22.719 Firmware Version: 8.0.0 00:09:22.719 Recommended Arb Burst: 6 00:09:22.719 IEEE OUI Identifier: 00 54 52 00:09:22.719 Multi-path I/O 00:09:22.719 May have multiple subsystem ports: No 00:09:22.719 May have multiple controllers: No 00:09:22.719 Associated with SR-IOV VF: No 00:09:22.719 Max Data Transfer Size: 524288 00:09:22.719 Max Number of Namespaces: 256 00:09:22.719 Max Number of I/O Queues: 64 00:09:22.719 NVMe Specification Version (VS): 1.4 00:09:22.719 NVMe Specification Version (Identify): 1.4 00:09:22.719 Maximum Queue Entries: 2048 00:09:22.719 Contiguous Queues Required: Yes 00:09:22.719 Arbitration Mechanisms Supported 00:09:22.719 Weighted Round Robin: Not Supported 00:09:22.719 Vendor Specific: Not Supported 
00:09:22.719 Reset Timeout: 7500 ms 00:09:22.719 Doorbell Stride: 4 bytes 00:09:22.719 NVM Subsystem Reset: Not Supported 00:09:22.719 Command Sets Supported 00:09:22.719 NVM Command Set: Supported 00:09:22.719 Boot Partition: Not Supported 00:09:22.719 Memory Page Size Minimum: 4096 bytes 00:09:22.719 Memory Page Size Maximum: 65536 bytes 00:09:22.719 Persistent Memory Region: Not Supported 00:09:22.719 Optional Asynchronous Events Supported 00:09:22.719 Namespace Attribute Notices: Supported 00:09:22.719 Firmware Activation Notices: Not Supported 00:09:22.719 ANA Change Notices: Not Supported 00:09:22.719 PLE Aggregate Log Change Notices: Not Supported 00:09:22.719 LBA Status Info Alert Notices: Not Supported 00:09:22.719 EGE Aggregate Log Change Notices: Not Supported 00:09:22.719 Normal NVM Subsystem Shutdown event: Not Supported 00:09:22.719 Zone Descriptor Change Notices: Not Supported 00:09:22.719 Discovery Log Change Notices: Not Supported 00:09:22.719 Controller Attributes 00:09:22.719 128-bit Host Identifier: Not Supported 00:09:22.719 Non-Operational Permissive Mode: Not Supported 00:09:22.719 NVM Sets: Not Supported 00:09:22.719 Read Recovery Levels: Not Supported 00:09:22.719 Endurance Groups: Not Supported 00:09:22.719 Predictable Latency Mode: Not Supported 00:09:22.719 Traffic Based Keep ALive: Not Supported 00:09:22.719 Namespace Granularity: Not Supported 00:09:22.719 SQ Associations: Not Supported 00:09:22.719 UUID List: Not Supported 00:09:22.719 Multi-Domain Subsystem: Not Supported 00:09:22.719 Fixed Capacity Management: Not Supported 00:09:22.719 Variable Capacity Management: Not Supported 00:09:22.719 Delete Endurance Group: Not Supported 00:09:22.719 Delete NVM Set: Not Supported 00:09:22.719 Extended LBA Formats Supported: Supported 00:09:22.719 Flexible Data Placement Supported: Not Supported 00:09:22.719 00:09:22.719 Controller Memory Buffer Support 00:09:22.719 ================================ 00:09:22.720 Supported: No 00:09:22.720 00:09:22.720 Persistent Memory Region Support 00:09:22.720 ================================ 00:09:22.720 Supported: No 00:09:22.720 00:09:22.720 Admin Command Set Attributes 00:09:22.720 ============================ 00:09:22.720 Security Send/Receive: Not Supported 00:09:22.720 Format NVM: Supported 00:09:22.720 Firmware Activate/Download: Not Supported 00:09:22.720 Namespace Management: Supported 00:09:22.720 Device Self-Test: Not Supported 00:09:22.720 Directives: Supported 00:09:22.720 NVMe-MI: Not Supported 00:09:22.720 Virtualization Management: Not Supported 00:09:22.720 Doorbell Buffer Config: Supported 00:09:22.720 Get LBA Status Capability: Not Supported 00:09:22.720 Command & Feature Lockdown Capability: Not Supported 00:09:22.720 Abort Command Limit: 4 00:09:22.720 Async Event Request Limit: 4 00:09:22.720 Number of Firmware Slots: N/A 00:09:22.720 Firmware Slot 1 Read-Only: N/A 00:09:22.720 Firmware Activation Without Reset: N/A 00:09:22.720 Multiple Update Detection Support: N/A 00:09:22.720 Firmware Update Granularity: No Information Provided 00:09:22.720 Per-Namespace SMART Log: Yes 00:09:22.720 Asymmetric Namespace Access Log Page: Not Supported 00:09:22.720 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:22.720 Command Effects Log Page: Supported 00:09:22.720 Get Log Page Extended Data: Supported 00:09:22.720 Telemetry Log Pages: Not Supported 00:09:22.720 Persistent Event Log Pages: Not Supported 00:09:22.720 Supported Log Pages Log Page: May Support 00:09:22.720 Commands Supported & Effects Log Page: Not Supported 
00:09:22.720 Feature Identifiers & Effects Log Page:May Support 00:09:22.720 NVMe-MI Commands & Effects Log Page: May Support 00:09:22.720 Data Area 4 for Telemetry Log: Not Supported 00:09:22.720 Error Log Page Entries Supported: 1 00:09:22.720 Keep Alive: Not Supported 00:09:22.720 00:09:22.720 NVM Command Set Attributes 00:09:22.720 ========================== 00:09:22.720 Submission Queue Entry Size 00:09:22.720 Max: 64 00:09:22.720 Min: 64 00:09:22.720 Completion Queue Entry Size 00:09:22.720 Max: 16 00:09:22.720 Min: 16 00:09:22.720 Number of Namespaces: 256 00:09:22.720 Compare Command: Supported 00:09:22.720 Write Uncorrectable Command: Not Supported 00:09:22.720 Dataset Management Command: Supported 00:09:22.720 Write Zeroes Command: Supported 00:09:22.720 Set Features Save Field: Supported 00:09:22.720 Reservations: Not Supported 00:09:22.720 Timestamp: Supported 00:09:22.720 Copy: Supported 00:09:22.720 Volatile Write Cache: Present 00:09:22.720 Atomic Write Unit (Normal): 1 00:09:22.720 Atomic Write Unit (PFail): 1 00:09:22.720 Atomic Compare & Write Unit: 1 00:09:22.720 Fused Compare & Write: Not Supported 00:09:22.720 Scatter-Gather List 00:09:22.720 SGL Command Set: Supported 00:09:22.720 SGL Keyed: Not Supported 00:09:22.720 SGL Bit Bucket Descriptor: Not Supported 00:09:22.720 SGL Metadata Pointer: Not Supported 00:09:22.720 Oversized SGL: Not Supported 00:09:22.720 SGL Metadata Address: Not Supported 00:09:22.720 SGL Offset: Not Supported 00:09:22.720 Transport SGL Data Block: Not Supported 00:09:22.720 Replay Protected Memory Block: Not Supported 00:09:22.720 00:09:22.720 Firmware Slot Information 00:09:22.720 ========================= 00:09:22.720 Active slot: 1 00:09:22.720 Slot 1 Firmware Revision: 1.0 00:09:22.720 00:09:22.720 00:09:22.720 Commands Supported and Effects 00:09:22.720 ============================== 00:09:22.720 Admin Commands 00:09:22.720 -------------- 00:09:22.720 Delete I/O Submission Queue (00h): Supported 00:09:22.720 Create I/O Submission Queue (01h): Supported 00:09:22.720 Get Log Page (02h): Supported 00:09:22.720 Delete I/O Completion Queue (04h): Supported 00:09:22.720 Create I/O Completion Queue (05h): Supported 00:09:22.720 Identify (06h): Supported 00:09:22.720 Abort (08h): Supported 00:09:22.720 Set Features (09h): Supported 00:09:22.720 Get Features (0Ah): Supported 00:09:22.720 Asynchronous Event Request (0Ch): Supported 00:09:22.720 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:22.720 Directive Send (19h): Supported 00:09:22.720 Directive Receive (1Ah): Supported 00:09:22.720 Virtualization Management (1Ch): Supported 00:09:22.720 Doorbell Buffer Config (7Ch): Supported 00:09:22.720 Format NVM (80h): Supported LBA-Change 00:09:22.720 I/O Commands 00:09:22.720 ------------ 00:09:22.720 Flush (00h): Supported LBA-Change 00:09:22.720 Write (01h): Supported LBA-Change 00:09:22.720 Read (02h): Supported 00:09:22.720 Compare (05h): Supported 00:09:22.720 Write Zeroes (08h): Supported LBA-Change 00:09:22.720 Dataset Management (09h): Supported LBA-Change 00:09:22.720 Unknown (0Ch): Supported 00:09:22.720 Unknown (12h): Supported 00:09:22.720 Copy (19h): Supported LBA-Change 00:09:22.720 Unknown (1Dh): Supported LBA-Change 00:09:22.720 00:09:22.720 Error Log 00:09:22.720 ========= 00:09:22.720 00:09:22.720 Arbitration 00:09:22.720 =========== 00:09:22.720 Arbitration Burst: no limit 00:09:22.720 00:09:22.720 Power Management 00:09:22.720 ================ 00:09:22.720 Number of Power States: 1 00:09:22.720 Current Power State: 
Power State #0 00:09:22.720 Power State #0: 00:09:22.720 Max Power: 25.00 W 00:09:22.720 Non-Operational State: Operational 00:09:22.720 Entry Latency: 16 microseconds 00:09:22.720 Exit Latency: 4 microseconds 00:09:22.720 Relative Read Throughput: 0 00:09:22.720 Relative Read Latency: 0 00:09:22.720 Relative Write Throughput: 0 00:09:22.720 Relative Write Latency: 0 00:09:22.720 Idle Power: Not Reported 00:09:22.720 Active Power: Not Reported 00:09:22.720 Non-Operational Permissive Mode: Not Supported 00:09:22.720 00:09:22.720 Health Information 00:09:22.720 ================== 00:09:22.720 Critical Warnings: 00:09:22.720 Available Spare Space: OK 00:09:22.720 Temperature: OK 00:09:22.720 Device Reliability: OK 00:09:22.720 Read Only: No 00:09:22.720 Volatile Memory Backup: OK 00:09:22.720 Current Temperature: 323 Kelvin (50 Celsius) 00:09:22.720 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:22.720 Available Spare: 0% 00:09:22.720 Available Spare Threshold: 0% 00:09:22.720 Life Percentage Used: 0% 00:09:22.720 Data Units Read: 1286 00:09:22.720 Data Units Written: 1064 00:09:22.720 Host Read Commands: 59446 00:09:22.720 Host Write Commands: 56436 00:09:22.720 Controller Busy Time: 0 minutes 00:09:22.720 Power Cycles: 0 00:09:22.720 Power On Hours: 0 hours 00:09:22.720 Unsafe Shutdowns: 0 00:09:22.720 Unrecoverable Media Errors: 0 00:09:22.720 Lifetime Error Log Entries: 0 00:09:22.720 Warning Temperature Time: 0 minutes 00:09:22.720 Critical Temperature Time: 0 minutes 00:09:22.720 00:09:22.720 Number of Queues 00:09:22.720 ================ 00:09:22.720 Number of I/O Submission Queues: 64 00:09:22.720 Number of I/O Completion Queues: 64 00:09:22.720 00:09:22.720 ZNS Specific Controller Data 00:09:22.720 ============================ 00:09:22.720 Zone Append Size Limit: 0 00:09:22.720 00:09:22.720 00:09:22.720 Active Namespaces 00:09:22.720 ================= 00:09:22.720 Namespace ID:1 00:09:22.720 Error Recovery Timeout: Unlimited 00:09:22.720 Command Set Identifier: NVM (00h) 00:09:22.720 Deallocate: Supported 00:09:22.720 Deallocated/Unwritten Error: Supported 00:09:22.720 Deallocated Read Value: All 0x00 00:09:22.720 Deallocate in Write Zeroes: Not Supported 00:09:22.720 Deallocated Guard Field: 0xFFFF 00:09:22.720 Flush: Supported 00:09:22.720 Reservation: Not Supported 00:09:22.720 Namespace Sharing Capabilities: Private 00:09:22.720 Size (in LBAs): 1310720 (5GiB) 00:09:22.720 Capacity (in LBAs): 1310720 (5GiB) 00:09:22.720 Utilization (in LBAs): 1310720 (5GiB) 00:09:22.720 Thin Provisioning: Not Supported 00:09:22.720 Per-NS Atomic Units: No 00:09:22.720 Maximum Single Source Range Length: 128 00:09:22.720 Maximum Copy Length: 128 00:09:22.720 Maximum Source Range Count: 128 00:09:22.720 NGUID/EUI64 Never Reused: No 00:09:22.720 Namespace Write Protected: No 00:09:22.720 Number of LBA Formats: 8 00:09:22.720 Current LBA Format: LBA Format #04 00:09:22.720 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.720 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.720 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.720 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.720 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.720 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.720 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.721 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.721 00:09:22.721 NVM Specific Namespace Data 00:09:22.721 =========================== 00:09:22.721 Logical Block Storage Tag Mask: 0 
00:09:22.721 Protection Information Capabilities: 00:09:22.721 16b Guard Protection Information Storage Tag Support: No 00:09:22.721 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:22.721 Storage Tag Check Read Support: No 00:09:22.721 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.721 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.721 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.721 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.721 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.721 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.721 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.721 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.721 09:35:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:22.721 09:35:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:22.980 ===================================================== 00:09:22.980 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:22.980 ===================================================== 00:09:22.980 Controller Capabilities/Features 00:09:22.980 ================================ 00:09:22.980 Vendor ID: 1b36 00:09:22.980 Subsystem Vendor ID: 1af4 00:09:22.980 Serial Number: 12342 00:09:22.980 Model Number: QEMU NVMe Ctrl 00:09:22.980 Firmware Version: 8.0.0 00:09:22.980 Recommended Arb Burst: 6 00:09:22.980 IEEE OUI Identifier: 00 54 52 00:09:22.980 Multi-path I/O 00:09:22.980 May have multiple subsystem ports: No 00:09:22.980 May have multiple controllers: No 00:09:22.980 Associated with SR-IOV VF: No 00:09:22.980 Max Data Transfer Size: 524288 00:09:22.980 Max Number of Namespaces: 256 00:09:22.980 Max Number of I/O Queues: 64 00:09:22.980 NVMe Specification Version (VS): 1.4 00:09:22.980 NVMe Specification Version (Identify): 1.4 00:09:22.980 Maximum Queue Entries: 2048 00:09:22.980 Contiguous Queues Required: Yes 00:09:22.980 Arbitration Mechanisms Supported 00:09:22.980 Weighted Round Robin: Not Supported 00:09:22.980 Vendor Specific: Not Supported 00:09:22.980 Reset Timeout: 7500 ms 00:09:22.980 Doorbell Stride: 4 bytes 00:09:22.980 NVM Subsystem Reset: Not Supported 00:09:22.980 Command Sets Supported 00:09:22.980 NVM Command Set: Supported 00:09:22.980 Boot Partition: Not Supported 00:09:22.980 Memory Page Size Minimum: 4096 bytes 00:09:22.980 Memory Page Size Maximum: 65536 bytes 00:09:22.980 Persistent Memory Region: Not Supported 00:09:22.980 Optional Asynchronous Events Supported 00:09:22.980 Namespace Attribute Notices: Supported 00:09:22.980 Firmware Activation Notices: Not Supported 00:09:22.980 ANA Change Notices: Not Supported 00:09:22.980 PLE Aggregate Log Change Notices: Not Supported 00:09:22.980 LBA Status Info Alert Notices: Not Supported 00:09:22.980 EGE Aggregate Log Change Notices: Not Supported 00:09:22.980 Normal NVM Subsystem Shutdown event: Not Supported 00:09:22.980 Zone Descriptor Change Notices: Not Supported 00:09:22.980 Discovery Log Change Notices: Not Supported 00:09:22.980 Controller Attributes 00:09:22.980 128-bit Host Identifier: 
Not Supported 00:09:22.980 Non-Operational Permissive Mode: Not Supported 00:09:22.980 NVM Sets: Not Supported 00:09:22.980 Read Recovery Levels: Not Supported 00:09:22.980 Endurance Groups: Not Supported 00:09:22.980 Predictable Latency Mode: Not Supported 00:09:22.980 Traffic Based Keep ALive: Not Supported 00:09:22.980 Namespace Granularity: Not Supported 00:09:22.980 SQ Associations: Not Supported 00:09:22.980 UUID List: Not Supported 00:09:22.980 Multi-Domain Subsystem: Not Supported 00:09:22.980 Fixed Capacity Management: Not Supported 00:09:22.980 Variable Capacity Management: Not Supported 00:09:22.980 Delete Endurance Group: Not Supported 00:09:22.980 Delete NVM Set: Not Supported 00:09:22.980 Extended LBA Formats Supported: Supported 00:09:22.980 Flexible Data Placement Supported: Not Supported 00:09:22.980 00:09:22.980 Controller Memory Buffer Support 00:09:22.980 ================================ 00:09:22.980 Supported: No 00:09:22.980 00:09:22.980 Persistent Memory Region Support 00:09:22.980 ================================ 00:09:22.980 Supported: No 00:09:22.980 00:09:22.980 Admin Command Set Attributes 00:09:22.980 ============================ 00:09:22.980 Security Send/Receive: Not Supported 00:09:22.980 Format NVM: Supported 00:09:22.980 Firmware Activate/Download: Not Supported 00:09:22.980 Namespace Management: Supported 00:09:22.980 Device Self-Test: Not Supported 00:09:22.980 Directives: Supported 00:09:22.980 NVMe-MI: Not Supported 00:09:22.980 Virtualization Management: Not Supported 00:09:22.980 Doorbell Buffer Config: Supported 00:09:22.980 Get LBA Status Capability: Not Supported 00:09:22.980 Command & Feature Lockdown Capability: Not Supported 00:09:22.980 Abort Command Limit: 4 00:09:22.980 Async Event Request Limit: 4 00:09:22.980 Number of Firmware Slots: N/A 00:09:22.980 Firmware Slot 1 Read-Only: N/A 00:09:22.980 Firmware Activation Without Reset: N/A 00:09:22.980 Multiple Update Detection Support: N/A 00:09:22.980 Firmware Update Granularity: No Information Provided 00:09:22.980 Per-Namespace SMART Log: Yes 00:09:22.980 Asymmetric Namespace Access Log Page: Not Supported 00:09:22.980 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:22.980 Command Effects Log Page: Supported 00:09:22.980 Get Log Page Extended Data: Supported 00:09:22.980 Telemetry Log Pages: Not Supported 00:09:22.980 Persistent Event Log Pages: Not Supported 00:09:22.980 Supported Log Pages Log Page: May Support 00:09:22.980 Commands Supported & Effects Log Page: Not Supported 00:09:22.980 Feature Identifiers & Effects Log Page:May Support 00:09:22.980 NVMe-MI Commands & Effects Log Page: May Support 00:09:22.980 Data Area 4 for Telemetry Log: Not Supported 00:09:22.980 Error Log Page Entries Supported: 1 00:09:22.980 Keep Alive: Not Supported 00:09:22.980 00:09:22.980 NVM Command Set Attributes 00:09:22.980 ========================== 00:09:22.980 Submission Queue Entry Size 00:09:22.980 Max: 64 00:09:22.980 Min: 64 00:09:22.980 Completion Queue Entry Size 00:09:22.980 Max: 16 00:09:22.980 Min: 16 00:09:22.980 Number of Namespaces: 256 00:09:22.980 Compare Command: Supported 00:09:22.980 Write Uncorrectable Command: Not Supported 00:09:22.980 Dataset Management Command: Supported 00:09:22.980 Write Zeroes Command: Supported 00:09:22.980 Set Features Save Field: Supported 00:09:22.980 Reservations: Not Supported 00:09:22.980 Timestamp: Supported 00:09:22.980 Copy: Supported 00:09:22.980 Volatile Write Cache: Present 00:09:22.980 Atomic Write Unit (Normal): 1 00:09:22.980 Atomic Write Unit 
(PFail): 1 00:09:22.980 Atomic Compare & Write Unit: 1 00:09:22.980 Fused Compare & Write: Not Supported 00:09:22.980 Scatter-Gather List 00:09:22.980 SGL Command Set: Supported 00:09:22.980 SGL Keyed: Not Supported 00:09:22.980 SGL Bit Bucket Descriptor: Not Supported 00:09:22.980 SGL Metadata Pointer: Not Supported 00:09:22.980 Oversized SGL: Not Supported 00:09:22.980 SGL Metadata Address: Not Supported 00:09:22.980 SGL Offset: Not Supported 00:09:22.980 Transport SGL Data Block: Not Supported 00:09:22.980 Replay Protected Memory Block: Not Supported 00:09:22.980 00:09:22.980 Firmware Slot Information 00:09:22.980 ========================= 00:09:22.980 Active slot: 1 00:09:22.980 Slot 1 Firmware Revision: 1.0 00:09:22.980 00:09:22.980 00:09:22.980 Commands Supported and Effects 00:09:22.980 ============================== 00:09:22.980 Admin Commands 00:09:22.980 -------------- 00:09:22.980 Delete I/O Submission Queue (00h): Supported 00:09:22.980 Create I/O Submission Queue (01h): Supported 00:09:22.980 Get Log Page (02h): Supported 00:09:22.980 Delete I/O Completion Queue (04h): Supported 00:09:22.980 Create I/O Completion Queue (05h): Supported 00:09:22.980 Identify (06h): Supported 00:09:22.980 Abort (08h): Supported 00:09:22.980 Set Features (09h): Supported 00:09:22.980 Get Features (0Ah): Supported 00:09:22.980 Asynchronous Event Request (0Ch): Supported 00:09:22.980 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:22.980 Directive Send (19h): Supported 00:09:22.980 Directive Receive (1Ah): Supported 00:09:22.980 Virtualization Management (1Ch): Supported 00:09:22.980 Doorbell Buffer Config (7Ch): Supported 00:09:22.980 Format NVM (80h): Supported LBA-Change 00:09:22.980 I/O Commands 00:09:22.980 ------------ 00:09:22.980 Flush (00h): Supported LBA-Change 00:09:22.980 Write (01h): Supported LBA-Change 00:09:22.981 Read (02h): Supported 00:09:22.981 Compare (05h): Supported 00:09:22.981 Write Zeroes (08h): Supported LBA-Change 00:09:22.981 Dataset Management (09h): Supported LBA-Change 00:09:22.981 Unknown (0Ch): Supported 00:09:22.981 Unknown (12h): Supported 00:09:22.981 Copy (19h): Supported LBA-Change 00:09:22.981 Unknown (1Dh): Supported LBA-Change 00:09:22.981 00:09:22.981 Error Log 00:09:22.981 ========= 00:09:22.981 00:09:22.981 Arbitration 00:09:22.981 =========== 00:09:22.981 Arbitration Burst: no limit 00:09:22.981 00:09:22.981 Power Management 00:09:22.981 ================ 00:09:22.981 Number of Power States: 1 00:09:22.981 Current Power State: Power State #0 00:09:22.981 Power State #0: 00:09:22.981 Max Power: 25.00 W 00:09:22.981 Non-Operational State: Operational 00:09:22.981 Entry Latency: 16 microseconds 00:09:22.981 Exit Latency: 4 microseconds 00:09:22.981 Relative Read Throughput: 0 00:09:22.981 Relative Read Latency: 0 00:09:22.981 Relative Write Throughput: 0 00:09:22.981 Relative Write Latency: 0 00:09:22.981 Idle Power: Not Reported 00:09:22.981 Active Power: Not Reported 00:09:22.981 Non-Operational Permissive Mode: Not Supported 00:09:22.981 00:09:22.981 Health Information 00:09:22.981 ================== 00:09:22.981 Critical Warnings: 00:09:22.981 Available Spare Space: OK 00:09:22.981 Temperature: OK 00:09:22.981 Device Reliability: OK 00:09:22.981 Read Only: No 00:09:22.981 Volatile Memory Backup: OK 00:09:22.981 Current Temperature: 323 Kelvin (50 Celsius) 00:09:22.981 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:22.981 Available Spare: 0% 00:09:22.981 Available Spare Threshold: 0% 00:09:22.981 Life Percentage Used: 0% 
00:09:22.981 Data Units Read: 2571 00:09:22.981 Data Units Written: 2251 00:09:22.981 Host Read Commands: 121107 00:09:22.981 Host Write Commands: 116877 00:09:22.981 Controller Busy Time: 0 minutes 00:09:22.981 Power Cycles: 0 00:09:22.981 Power On Hours: 0 hours 00:09:22.981 Unsafe Shutdowns: 0 00:09:22.981 Unrecoverable Media Errors: 0 00:09:22.981 Lifetime Error Log Entries: 0 00:09:22.981 Warning Temperature Time: 0 minutes 00:09:22.981 Critical Temperature Time: 0 minutes 00:09:22.981 00:09:22.981 Number of Queues 00:09:22.981 ================ 00:09:22.981 Number of I/O Submission Queues: 64 00:09:22.981 Number of I/O Completion Queues: 64 00:09:22.981 00:09:22.981 ZNS Specific Controller Data 00:09:22.981 ============================ 00:09:22.981 Zone Append Size Limit: 0 00:09:22.981 00:09:22.981 00:09:22.981 Active Namespaces 00:09:22.981 ================= 00:09:22.981 Namespace ID:1 00:09:22.981 Error Recovery Timeout: Unlimited 00:09:22.981 Command Set Identifier: NVM (00h) 00:09:22.981 Deallocate: Supported 00:09:22.981 Deallocated/Unwritten Error: Supported 00:09:22.981 Deallocated Read Value: All 0x00 00:09:22.981 Deallocate in Write Zeroes: Not Supported 00:09:22.981 Deallocated Guard Field: 0xFFFF 00:09:22.981 Flush: Supported 00:09:22.981 Reservation: Not Supported 00:09:22.981 Namespace Sharing Capabilities: Private 00:09:22.981 Size (in LBAs): 1048576 (4GiB) 00:09:22.981 Capacity (in LBAs): 1048576 (4GiB) 00:09:22.981 Utilization (in LBAs): 1048576 (4GiB) 00:09:22.981 Thin Provisioning: Not Supported 00:09:22.981 Per-NS Atomic Units: No 00:09:22.981 Maximum Single Source Range Length: 128 00:09:22.981 Maximum Copy Length: 128 00:09:22.981 Maximum Source Range Count: 128 00:09:22.981 NGUID/EUI64 Never Reused: No 00:09:22.981 Namespace Write Protected: No 00:09:22.981 Number of LBA Formats: 8 00:09:22.981 Current LBA Format: LBA Format #04 00:09:22.981 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.981 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.981 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.981 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.981 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.981 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.981 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.981 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.981 00:09:22.981 NVM Specific Namespace Data 00:09:22.981 =========================== 00:09:22.981 Logical Block Storage Tag Mask: 0 00:09:22.981 Protection Information Capabilities: 00:09:22.981 16b Guard Protection Information Storage Tag Support: No 00:09:22.981 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:22.981 Storage Tag Check Read Support: No 00:09:22.981 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.981 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.981 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.981 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.981 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.981 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.981 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.981 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.981 Namespace ID:2 00:09:22.981 Error Recovery Timeout: Unlimited 00:09:22.981 Command Set Identifier: NVM (00h) 00:09:22.981 Deallocate: Supported 00:09:22.981 Deallocated/Unwritten Error: Supported 00:09:22.981 Deallocated Read Value: All 0x00 00:09:22.981 Deallocate in Write Zeroes: Not Supported 00:09:22.981 Deallocated Guard Field: 0xFFFF 00:09:22.981 Flush: Supported 00:09:22.981 Reservation: Not Supported 00:09:22.981 Namespace Sharing Capabilities: Private 00:09:22.981 Size (in LBAs): 1048576 (4GiB) 00:09:22.981 Capacity (in LBAs): 1048576 (4GiB) 00:09:22.981 Utilization (in LBAs): 1048576 (4GiB) 00:09:22.981 Thin Provisioning: Not Supported 00:09:22.981 Per-NS Atomic Units: No 00:09:22.981 Maximum Single Source Range Length: 128 00:09:22.981 Maximum Copy Length: 128 00:09:22.981 Maximum Source Range Count: 128 00:09:22.981 NGUID/EUI64 Never Reused: No 00:09:22.981 Namespace Write Protected: No 00:09:22.981 Number of LBA Formats: 8 00:09:22.981 Current LBA Format: LBA Format #04 00:09:22.981 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.981 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.981 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.981 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.981 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.981 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.981 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.981 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.981 00:09:22.981 NVM Specific Namespace Data 00:09:22.981 =========================== 00:09:22.981 Logical Block Storage Tag Mask: 0 00:09:22.981 Protection Information Capabilities: 00:09:22.981 16b Guard Protection Information Storage Tag Support: No 00:09:22.981 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:22.981 Storage Tag Check Read Support: No 00:09:22.981 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.981 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.981 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.981 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.981 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.981 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.981 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.981 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.981 Namespace ID:3 00:09:22.981 Error Recovery Timeout: Unlimited 00:09:22.981 Command Set Identifier: NVM (00h) 00:09:22.981 Deallocate: Supported 00:09:22.981 Deallocated/Unwritten Error: Supported 00:09:22.981 Deallocated Read Value: All 0x00 00:09:22.981 Deallocate in Write Zeroes: Not Supported 00:09:22.981 Deallocated Guard Field: 0xFFFF 00:09:22.981 Flush: Supported 00:09:22.981 Reservation: Not Supported 00:09:22.981 Namespace Sharing Capabilities: Private 00:09:22.981 Size (in LBAs): 1048576 (4GiB) 00:09:22.981 Capacity (in LBAs): 1048576 (4GiB) 00:09:22.981 Utilization (in LBAs): 1048576 (4GiB) 00:09:22.981 Thin Provisioning: Not Supported 00:09:22.981 Per-NS Atomic Units: No 00:09:22.981 Maximum Single Source Range 
Length: 128 00:09:22.981 Maximum Copy Length: 128 00:09:22.981 Maximum Source Range Count: 128 00:09:22.981 NGUID/EUI64 Never Reused: No 00:09:22.981 Namespace Write Protected: No 00:09:22.981 Number of LBA Formats: 8 00:09:22.982 Current LBA Format: LBA Format #04 00:09:22.982 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.982 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.982 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.982 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.982 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.982 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.982 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.982 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.982 00:09:22.982 NVM Specific Namespace Data 00:09:22.982 =========================== 00:09:22.982 Logical Block Storage Tag Mask: 0 00:09:22.982 Protection Information Capabilities: 00:09:22.982 16b Guard Protection Information Storage Tag Support: No 00:09:22.982 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:22.982 Storage Tag Check Read Support: No 00:09:22.982 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.982 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.982 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.982 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.982 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.982 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.982 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.982 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.241 09:35:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:23.241 09:35:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:23.241 ===================================================== 00:09:23.241 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:23.241 ===================================================== 00:09:23.241 Controller Capabilities/Features 00:09:23.241 ================================ 00:09:23.241 Vendor ID: 1b36 00:09:23.241 Subsystem Vendor ID: 1af4 00:09:23.241 Serial Number: 12343 00:09:23.241 Model Number: QEMU NVMe Ctrl 00:09:23.241 Firmware Version: 8.0.0 00:09:23.241 Recommended Arb Burst: 6 00:09:23.241 IEEE OUI Identifier: 00 54 52 00:09:23.241 Multi-path I/O 00:09:23.241 May have multiple subsystem ports: No 00:09:23.241 May have multiple controllers: Yes 00:09:23.241 Associated with SR-IOV VF: No 00:09:23.241 Max Data Transfer Size: 524288 00:09:23.241 Max Number of Namespaces: 256 00:09:23.241 Max Number of I/O Queues: 64 00:09:23.241 NVMe Specification Version (VS): 1.4 00:09:23.241 NVMe Specification Version (Identify): 1.4 00:09:23.241 Maximum Queue Entries: 2048 00:09:23.241 Contiguous Queues Required: Yes 00:09:23.241 Arbitration Mechanisms Supported 00:09:23.241 Weighted Round Robin: Not Supported 00:09:23.241 Vendor Specific: Not Supported 00:09:23.241 Reset Timeout: 7500 ms 00:09:23.241 Doorbell Stride: 4 bytes 00:09:23.241 NVM Subsystem Reset: Not Supported 
00:09:23.241 Command Sets Supported 00:09:23.241 NVM Command Set: Supported 00:09:23.241 Boot Partition: Not Supported 00:09:23.241 Memory Page Size Minimum: 4096 bytes 00:09:23.241 Memory Page Size Maximum: 65536 bytes 00:09:23.241 Persistent Memory Region: Not Supported 00:09:23.241 Optional Asynchronous Events Supported 00:09:23.241 Namespace Attribute Notices: Supported 00:09:23.241 Firmware Activation Notices: Not Supported 00:09:23.241 ANA Change Notices: Not Supported 00:09:23.241 PLE Aggregate Log Change Notices: Not Supported 00:09:23.241 LBA Status Info Alert Notices: Not Supported 00:09:23.241 EGE Aggregate Log Change Notices: Not Supported 00:09:23.241 Normal NVM Subsystem Shutdown event: Not Supported 00:09:23.241 Zone Descriptor Change Notices: Not Supported 00:09:23.241 Discovery Log Change Notices: Not Supported 00:09:23.241 Controller Attributes 00:09:23.241 128-bit Host Identifier: Not Supported 00:09:23.241 Non-Operational Permissive Mode: Not Supported 00:09:23.241 NVM Sets: Not Supported 00:09:23.241 Read Recovery Levels: Not Supported 00:09:23.241 Endurance Groups: Supported 00:09:23.241 Predictable Latency Mode: Not Supported 00:09:23.241 Traffic Based Keep ALive: Not Supported 00:09:23.241 Namespace Granularity: Not Supported 00:09:23.241 SQ Associations: Not Supported 00:09:23.241 UUID List: Not Supported 00:09:23.241 Multi-Domain Subsystem: Not Supported 00:09:23.241 Fixed Capacity Management: Not Supported 00:09:23.241 Variable Capacity Management: Not Supported 00:09:23.241 Delete Endurance Group: Not Supported 00:09:23.241 Delete NVM Set: Not Supported 00:09:23.241 Extended LBA Formats Supported: Supported 00:09:23.241 Flexible Data Placement Supported: Supported 00:09:23.241 00:09:23.241 Controller Memory Buffer Support 00:09:23.241 ================================ 00:09:23.241 Supported: No 00:09:23.241 00:09:23.241 Persistent Memory Region Support 00:09:23.241 ================================ 00:09:23.241 Supported: No 00:09:23.241 00:09:23.241 Admin Command Set Attributes 00:09:23.241 ============================ 00:09:23.241 Security Send/Receive: Not Supported 00:09:23.241 Format NVM: Supported 00:09:23.241 Firmware Activate/Download: Not Supported 00:09:23.241 Namespace Management: Supported 00:09:23.241 Device Self-Test: Not Supported 00:09:23.241 Directives: Supported 00:09:23.241 NVMe-MI: Not Supported 00:09:23.241 Virtualization Management: Not Supported 00:09:23.241 Doorbell Buffer Config: Supported 00:09:23.241 Get LBA Status Capability: Not Supported 00:09:23.241 Command & Feature Lockdown Capability: Not Supported 00:09:23.241 Abort Command Limit: 4 00:09:23.241 Async Event Request Limit: 4 00:09:23.241 Number of Firmware Slots: N/A 00:09:23.241 Firmware Slot 1 Read-Only: N/A 00:09:23.241 Firmware Activation Without Reset: N/A 00:09:23.241 Multiple Update Detection Support: N/A 00:09:23.241 Firmware Update Granularity: No Information Provided 00:09:23.241 Per-Namespace SMART Log: Yes 00:09:23.241 Asymmetric Namespace Access Log Page: Not Supported 00:09:23.241 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:23.241 Command Effects Log Page: Supported 00:09:23.241 Get Log Page Extended Data: Supported 00:09:23.241 Telemetry Log Pages: Not Supported 00:09:23.241 Persistent Event Log Pages: Not Supported 00:09:23.241 Supported Log Pages Log Page: May Support 00:09:23.241 Commands Supported & Effects Log Page: Not Supported 00:09:23.241 Feature Identifiers & Effects Log Page:May Support 00:09:23.241 NVMe-MI Commands & Effects Log Page: May 
Support 00:09:23.241 Data Area 4 for Telemetry Log: Not Supported 00:09:23.241 Error Log Page Entries Supported: 1 00:09:23.241 Keep Alive: Not Supported 00:09:23.241 00:09:23.241 NVM Command Set Attributes 00:09:23.241 ========================== 00:09:23.241 Submission Queue Entry Size 00:09:23.241 Max: 64 00:09:23.242 Min: 64 00:09:23.242 Completion Queue Entry Size 00:09:23.242 Max: 16 00:09:23.242 Min: 16 00:09:23.242 Number of Namespaces: 256 00:09:23.242 Compare Command: Supported 00:09:23.242 Write Uncorrectable Command: Not Supported 00:09:23.242 Dataset Management Command: Supported 00:09:23.242 Write Zeroes Command: Supported 00:09:23.242 Set Features Save Field: Supported 00:09:23.242 Reservations: Not Supported 00:09:23.242 Timestamp: Supported 00:09:23.242 Copy: Supported 00:09:23.242 Volatile Write Cache: Present 00:09:23.242 Atomic Write Unit (Normal): 1 00:09:23.242 Atomic Write Unit (PFail): 1 00:09:23.242 Atomic Compare & Write Unit: 1 00:09:23.242 Fused Compare & Write: Not Supported 00:09:23.242 Scatter-Gather List 00:09:23.242 SGL Command Set: Supported 00:09:23.242 SGL Keyed: Not Supported 00:09:23.242 SGL Bit Bucket Descriptor: Not Supported 00:09:23.242 SGL Metadata Pointer: Not Supported 00:09:23.242 Oversized SGL: Not Supported 00:09:23.242 SGL Metadata Address: Not Supported 00:09:23.242 SGL Offset: Not Supported 00:09:23.242 Transport SGL Data Block: Not Supported 00:09:23.242 Replay Protected Memory Block: Not Supported 00:09:23.242 00:09:23.242 Firmware Slot Information 00:09:23.242 ========================= 00:09:23.242 Active slot: 1 00:09:23.242 Slot 1 Firmware Revision: 1.0 00:09:23.242 00:09:23.242 00:09:23.242 Commands Supported and Effects 00:09:23.242 ============================== 00:09:23.242 Admin Commands 00:09:23.242 -------------- 00:09:23.242 Delete I/O Submission Queue (00h): Supported 00:09:23.242 Create I/O Submission Queue (01h): Supported 00:09:23.242 Get Log Page (02h): Supported 00:09:23.242 Delete I/O Completion Queue (04h): Supported 00:09:23.242 Create I/O Completion Queue (05h): Supported 00:09:23.242 Identify (06h): Supported 00:09:23.242 Abort (08h): Supported 00:09:23.242 Set Features (09h): Supported 00:09:23.242 Get Features (0Ah): Supported 00:09:23.242 Asynchronous Event Request (0Ch): Supported 00:09:23.242 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:23.242 Directive Send (19h): Supported 00:09:23.242 Directive Receive (1Ah): Supported 00:09:23.242 Virtualization Management (1Ch): Supported 00:09:23.242 Doorbell Buffer Config (7Ch): Supported 00:09:23.242 Format NVM (80h): Supported LBA-Change 00:09:23.242 I/O Commands 00:09:23.242 ------------ 00:09:23.242 Flush (00h): Supported LBA-Change 00:09:23.242 Write (01h): Supported LBA-Change 00:09:23.242 Read (02h): Supported 00:09:23.242 Compare (05h): Supported 00:09:23.242 Write Zeroes (08h): Supported LBA-Change 00:09:23.242 Dataset Management (09h): Supported LBA-Change 00:09:23.242 Unknown (0Ch): Supported 00:09:23.242 Unknown (12h): Supported 00:09:23.242 Copy (19h): Supported LBA-Change 00:09:23.242 Unknown (1Dh): Supported LBA-Change 00:09:23.242 00:09:23.242 Error Log 00:09:23.242 ========= 00:09:23.242 00:09:23.242 Arbitration 00:09:23.242 =========== 00:09:23.242 Arbitration Burst: no limit 00:09:23.242 00:09:23.242 Power Management 00:09:23.242 ================ 00:09:23.242 Number of Power States: 1 00:09:23.242 Current Power State: Power State #0 00:09:23.242 Power State #0: 00:09:23.242 Max Power: 25.00 W 00:09:23.242 Non-Operational State: 
Operational 00:09:23.242 Entry Latency: 16 microseconds 00:09:23.242 Exit Latency: 4 microseconds 00:09:23.242 Relative Read Throughput: 0 00:09:23.242 Relative Read Latency: 0 00:09:23.242 Relative Write Throughput: 0 00:09:23.242 Relative Write Latency: 0 00:09:23.242 Idle Power: Not Reported 00:09:23.242 Active Power: Not Reported 00:09:23.242 Non-Operational Permissive Mode: Not Supported 00:09:23.242 00:09:23.242 Health Information 00:09:23.242 ================== 00:09:23.242 Critical Warnings: 00:09:23.242 Available Spare Space: OK 00:09:23.242 Temperature: OK 00:09:23.242 Device Reliability: OK 00:09:23.242 Read Only: No 00:09:23.242 Volatile Memory Backup: OK 00:09:23.242 Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.242 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:23.242 Available Spare: 0% 00:09:23.242 Available Spare Threshold: 0% 00:09:23.242 Life Percentage Used: 0% 00:09:23.242 Data Units Read: 920 00:09:23.242 Data Units Written: 813 00:09:23.242 Host Read Commands: 40879 00:09:23.242 Host Write Commands: 39469 00:09:23.242 Controller Busy Time: 0 minutes 00:09:23.242 Power Cycles: 0 00:09:23.242 Power On Hours: 0 hours 00:09:23.242 Unsafe Shutdowns: 0 00:09:23.242 Unrecoverable Media Errors: 0 00:09:23.242 Lifetime Error Log Entries: 0 00:09:23.242 Warning Temperature Time: 0 minutes 00:09:23.242 Critical Temperature Time: 0 minutes 00:09:23.242 00:09:23.242 Number of Queues 00:09:23.242 ================ 00:09:23.242 Number of I/O Submission Queues: 64 00:09:23.242 Number of I/O Completion Queues: 64 00:09:23.242 00:09:23.242 ZNS Specific Controller Data 00:09:23.242 ============================ 00:09:23.242 Zone Append Size Limit: 0 00:09:23.242 00:09:23.242 00:09:23.242 Active Namespaces 00:09:23.242 ================= 00:09:23.242 Namespace ID:1 00:09:23.242 Error Recovery Timeout: Unlimited 00:09:23.242 Command Set Identifier: NVM (00h) 00:09:23.242 Deallocate: Supported 00:09:23.242 Deallocated/Unwritten Error: Supported 00:09:23.242 Deallocated Read Value: All 0x00 00:09:23.242 Deallocate in Write Zeroes: Not Supported 00:09:23.242 Deallocated Guard Field: 0xFFFF 00:09:23.242 Flush: Supported 00:09:23.242 Reservation: Not Supported 00:09:23.242 Namespace Sharing Capabilities: Multiple Controllers 00:09:23.242 Size (in LBAs): 262144 (1GiB) 00:09:23.242 Capacity (in LBAs): 262144 (1GiB) 00:09:23.242 Utilization (in LBAs): 262144 (1GiB) 00:09:23.242 Thin Provisioning: Not Supported 00:09:23.242 Per-NS Atomic Units: No 00:09:23.242 Maximum Single Source Range Length: 128 00:09:23.242 Maximum Copy Length: 128 00:09:23.242 Maximum Source Range Count: 128 00:09:23.242 NGUID/EUI64 Never Reused: No 00:09:23.242 Namespace Write Protected: No 00:09:23.242 Endurance group ID: 1 00:09:23.242 Number of LBA Formats: 8 00:09:23.242 Current LBA Format: LBA Format #04 00:09:23.242 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:23.242 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:23.242 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:23.242 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:23.242 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:23.242 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:23.242 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:23.242 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:23.242 00:09:23.242 Get Feature FDP: 00:09:23.242 ================ 00:09:23.242 Enabled: Yes 00:09:23.242 FDP configuration index: 0 00:09:23.242 00:09:23.242 FDP configurations log page 00:09:23.242 
=========================== 00:09:23.242 Number of FDP configurations: 1 00:09:23.242 Version: 0 00:09:23.242 Size: 112 00:09:23.242 FDP Configuration Descriptor: 0 00:09:23.242 Descriptor Size: 96 00:09:23.242 Reclaim Group Identifier format: 2 00:09:23.242 FDP Volatile Write Cache: Not Present 00:09:23.242 FDP Configuration: Valid 00:09:23.242 Vendor Specific Size: 0 00:09:23.242 Number of Reclaim Groups: 2 00:09:23.242 Number of Reclaim Unit Handles: 8 00:09:23.242 Max Placement Identifiers: 128 00:09:23.242 Number of Namespaces Supported: 256 00:09:23.242 Reclaim Unit Nominal Size: 6000000 bytes 00:09:23.242 Estimated Reclaim Unit Time Limit: Not Reported 00:09:23.242 RUH Desc #000: RUH Type: Initially Isolated 00:09:23.242 RUH Desc #001: RUH Type: Initially Isolated 00:09:23.242 RUH Desc #002: RUH Type: Initially Isolated 00:09:23.242 RUH Desc #003: RUH Type: Initially Isolated 00:09:23.242 RUH Desc #004: RUH Type: Initially Isolated 00:09:23.242 RUH Desc #005: RUH Type: Initially Isolated 00:09:23.242 RUH Desc #006: RUH Type: Initially Isolated 00:09:23.242 RUH Desc #007: RUH Type: Initially Isolated 00:09:23.242 00:09:23.242 FDP reclaim unit handle usage log page 00:09:23.501 ====================================== 00:09:23.501 Number of Reclaim Unit Handles: 8 00:09:23.501 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:23.501 RUH Usage Desc #001: RUH Attributes: Unused 00:09:23.501 RUH Usage Desc #002: RUH Attributes: Unused 00:09:23.501 RUH Usage Desc #003: RUH Attributes: Unused 00:09:23.501 RUH Usage Desc #004: RUH Attributes: Unused 00:09:23.501 RUH Usage Desc #005: RUH Attributes: Unused 00:09:23.501 RUH Usage Desc #006: RUH Attributes: Unused 00:09:23.501 RUH Usage Desc #007: RUH Attributes: Unused 00:09:23.501 00:09:23.501 FDP statistics log page 00:09:23.501 ======================= 00:09:23.501 Host bytes with metadata written: 490381312 00:09:23.501 Media bytes with metadata written: 490434560 00:09:23.501 Media bytes erased: 0 00:09:23.501 00:09:23.501 FDP events log page 00:09:23.501 =================== 00:09:23.501 Number of FDP events: 0 00:09:23.501 00:09:23.501 NVM Specific Namespace Data 00:09:23.501 =========================== 00:09:23.501 Logical Block Storage Tag Mask: 0 00:09:23.501 Protection Information Capabilities: 00:09:23.501 16b Guard Protection Information Storage Tag Support: No 00:09:23.501 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:23.501 Storage Tag Check Read Support: No 00:09:23.501 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.501 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.501 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.501 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.501 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.501 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.501 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.501 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.501 00:09:23.501 real 0m1.604s 00:09:23.501 user 0m0.550s 00:09:23.501 sys 0m0.829s 00:09:23.501 09:35:01 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.501
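The identify dumps above come from looping over each attached PCIe controller and running spdk_nvme_identify against it, and the headline namespace sizes and temperatures follow directly from the raw fields. Below is a minimal bash sketch of both, assuming a hypothetical two-entry BDF list; the binary path and the -r/-i flags are the ones shown in the log, everything else is illustrative only:

#!/usr/bin/env bash
# Sketch only: dump each controller the way the nvme_identify test does above.
bdfs=("0000:00:12.0" "0000:00:13.0")   # assumed PCIe addresses, for illustration
identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
for bdf in "${bdfs[@]}"; do
    "$identify" -r "trtype:PCIe traddr:${bdf}" -i 0
done
# Namespace capacity = LBA count * data size of the current LBA format (#04 = 4096 bytes):
echo "$(( 1048576 * 4096 / 1024**3 )) GiB"   # 4 GiB, as reported for nqn.2019-08.org.qemu:12342
echo "$(( 262144 * 4096 / 1024**3 )) GiB"    # 1 GiB, as reported for the FDP-capable subsystem
# Composite temperature is reported in Kelvin; the Celsius value is the same reading minus 273:
echo "$(( 323 - 273 )) C"                    # 50 C (threshold: 343 K -> 70 C)

The per-controller loop is why the blocks above repeat with only small differences: the 0000:00:13.0 controller advertises a different subsystem NQN (fdp-subsys3), shares its 1GiB namespace across multiple controllers, and, unlike the controller before it, reports Flexible Data Placement as supported, which is where the FDP configuration, reclaim unit handle usage, and statistics log pages come from.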
************************************ 00:09:23.501 END TEST nvme_identify 00:09:23.501 ************************************ 00:09:23.501 09:35:01 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:23.501 09:35:01 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:23.501 09:35:01 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:23.501 09:35:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.501 09:35:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:23.501 ************************************ 00:09:23.501 START TEST nvme_perf 00:09:23.501 ************************************ 00:09:23.501 09:35:01 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:09:23.501 09:35:01 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:24.887 Initializing NVMe Controllers 00:09:24.887 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:24.887 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:24.887 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:24.887 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:24.887 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:24.887 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:24.887 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:24.887 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:24.887 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:24.887 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:24.887 Initialization complete. Launching workers. 00:09:24.887 ======================================================== 00:09:24.887 Latency(us) 00:09:24.887 Device Information : IOPS MiB/s Average min max 00:09:24.887 PCIE (0000:00:10.0) NSID 1 from core 0: 14003.90 164.11 9144.52 5958.58 36267.37 00:09:24.887 PCIE (0000:00:11.0) NSID 1 from core 0: 14003.90 164.11 9138.01 5653.29 35880.63 00:09:24.887 PCIE (0000:00:13.0) NSID 1 from core 0: 14003.90 164.11 9130.39 4853.95 36003.42 00:09:24.887 PCIE (0000:00:12.0) NSID 1 from core 0: 14003.90 164.11 9122.49 4525.30 35522.19 00:09:24.887 PCIE (0000:00:12.0) NSID 2 from core 0: 14003.90 164.11 9114.71 4228.50 35135.43 00:09:24.887 PCIE (0000:00:12.0) NSID 3 from core 0: 14067.85 164.86 9065.31 3870.94 29364.48 00:09:24.887 ======================================================== 00:09:24.887 Total : 84087.35 985.40 9119.20 3870.94 36267.37 00:09:24.887 00:09:24.887 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:24.887 ================================================================================= 00:09:24.887 1.00000% : 8001.182us 00:09:24.887 10.00000% : 8317.018us 00:09:24.887 25.00000% : 8527.576us 00:09:24.887 50.00000% : 8843.412us 00:09:24.887 75.00000% : 9159.248us 00:09:24.887 90.00000% : 9527.724us 00:09:24.887 95.00000% : 9948.839us 00:09:24.887 98.00000% : 11843.855us 00:09:24.887 99.00000% : 18844.890us 00:09:24.887 99.50000% : 29267.483us 00:09:24.887 99.90000% : 36005.320us 00:09:24.887 99.99000% : 36426.435us 00:09:24.887 99.99900% : 36426.435us 00:09:24.887 99.99990% : 36426.435us 00:09:24.887 99.99999% : 36426.435us 00:09:24.887 00:09:24.887 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:24.887 ================================================================================= 00:09:24.887 1.00000% : 8106.461us 00:09:24.887 10.00000% : 8369.658us 00:09:24.887 25.00000% : 8580.215us 00:09:24.887 
50.00000% : 8843.412us 00:09:24.887 75.00000% : 9106.609us 00:09:24.887 90.00000% : 9475.084us 00:09:24.887 95.00000% : 9948.839us 00:09:24.887 98.00000% : 11843.855us 00:09:24.887 99.00000% : 18213.218us 00:09:24.887 99.50000% : 29267.483us 00:09:24.887 99.90000% : 35794.763us 00:09:24.887 99.99000% : 36005.320us 00:09:24.887 99.99900% : 36005.320us 00:09:24.887 99.99990% : 36005.320us 00:09:24.887 99.99999% : 36005.320us 00:09:24.887 00:09:24.887 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:24.887 ================================================================================= 00:09:24.887 1.00000% : 8053.822us 00:09:24.887 10.00000% : 8369.658us 00:09:24.887 25.00000% : 8580.215us 00:09:24.887 50.00000% : 8843.412us 00:09:24.887 75.00000% : 9106.609us 00:09:24.887 90.00000% : 9475.084us 00:09:24.887 95.00000% : 9948.839us 00:09:24.887 98.00000% : 12054.413us 00:09:24.887 99.00000% : 18318.496us 00:09:24.887 99.50000% : 29688.598us 00:09:24.887 99.90000% : 35794.763us 00:09:24.887 99.99000% : 36005.320us 00:09:24.887 99.99900% : 36005.320us 00:09:24.887 99.99990% : 36005.320us 00:09:24.887 99.99999% : 36005.320us 00:09:24.887 00:09:24.887 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:24.887 ================================================================================= 00:09:24.887 1.00000% : 8053.822us 00:09:24.887 10.00000% : 8369.658us 00:09:24.887 25.00000% : 8580.215us 00:09:24.887 50.00000% : 8790.773us 00:09:24.887 75.00000% : 9106.609us 00:09:24.887 90.00000% : 9475.084us 00:09:24.887 95.00000% : 9896.199us 00:09:24.887 98.00000% : 12159.692us 00:09:24.887 99.00000% : 18318.496us 00:09:24.887 99.50000% : 29478.040us 00:09:24.887 99.90000% : 35373.648us 00:09:24.887 99.99000% : 35584.206us 00:09:24.887 99.99900% : 35584.206us 00:09:24.887 99.99990% : 35584.206us 00:09:24.887 99.99999% : 35584.206us 00:09:24.887 00:09:24.887 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:24.887 ================================================================================= 00:09:24.887 1.00000% : 8053.822us 00:09:24.887 10.00000% : 8369.658us 00:09:24.887 25.00000% : 8527.576us 00:09:24.887 50.00000% : 8843.412us 00:09:24.887 75.00000% : 9106.609us 00:09:24.887 90.00000% : 9475.084us 00:09:24.887 95.00000% : 9896.199us 00:09:24.887 98.00000% : 11949.134us 00:09:24.887 99.00000% : 18423.775us 00:09:24.887 99.50000% : 29056.925us 00:09:24.887 99.90000% : 34952.533us 00:09:24.887 99.99000% : 35163.091us 00:09:24.887 99.99900% : 35163.091us 00:09:24.887 99.99990% : 35163.091us 00:09:24.887 99.99999% : 35163.091us 00:09:24.887 00:09:24.887 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:24.887 ================================================================================= 00:09:24.887 1.00000% : 8001.182us 00:09:24.887 10.00000% : 8369.658us 00:09:24.887 25.00000% : 8580.215us 00:09:24.887 50.00000% : 8843.412us 00:09:24.887 75.00000% : 9106.609us 00:09:24.887 90.00000% : 9475.084us 00:09:24.887 95.00000% : 9896.199us 00:09:24.887 98.00000% : 11791.216us 00:09:24.887 99.00000% : 18634.333us 00:09:24.887 99.50000% : 22845.481us 00:09:24.887 99.90000% : 29267.483us 00:09:24.887 99.99000% : 29478.040us 00:09:24.887 99.99900% : 29478.040us 00:09:24.887 99.99990% : 29478.040us 00:09:24.887 99.99999% : 29478.040us 00:09:24.887 00:09:24.887 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:24.887 ============================================================================== 00:09:24.887 
Range in us Cumulative IO count 00:09:24.887 5948.247 - 5974.567: 0.0214% ( 3) 00:09:24.887 5974.567 - 6000.887: 0.0285% ( 1) 00:09:24.887 6000.887 - 6027.206: 0.0499% ( 3) 00:09:24.887 6027.206 - 6053.526: 0.0642% ( 2) 00:09:24.887 6053.526 - 6079.846: 0.0785% ( 2) 00:09:24.887 6079.846 - 6106.165: 0.0928% ( 2) 00:09:24.887 6106.165 - 6132.485: 0.0999% ( 1) 00:09:24.887 6132.485 - 6158.805: 0.1213% ( 3) 00:09:24.887 6158.805 - 6185.124: 0.1284% ( 1) 00:09:24.887 6185.124 - 6211.444: 0.1498% ( 3) 00:09:24.887 6211.444 - 6237.764: 0.1641% ( 2) 00:09:24.887 6237.764 - 6264.084: 0.1784% ( 2) 00:09:24.887 6264.084 - 6290.403: 0.1855% ( 1) 00:09:24.887 6290.403 - 6316.723: 0.2140% ( 4) 00:09:24.887 6316.723 - 6343.043: 0.2212% ( 1) 00:09:24.887 6343.043 - 6369.362: 0.2354% ( 2) 00:09:24.887 6369.362 - 6395.682: 0.2497% ( 2) 00:09:24.887 6395.682 - 6422.002: 0.2568% ( 1) 00:09:24.887 6422.002 - 6448.321: 0.2783% ( 3) 00:09:24.887 6448.321 - 6474.641: 0.2925% ( 2) 00:09:24.887 6474.641 - 6500.961: 0.3068% ( 2) 00:09:24.887 6500.961 - 6527.280: 0.3211% ( 2) 00:09:24.887 6527.280 - 6553.600: 0.3353% ( 2) 00:09:24.887 6553.600 - 6579.920: 0.3496% ( 2) 00:09:24.887 6579.920 - 6606.239: 0.3639% ( 2) 00:09:24.887 6606.239 - 6632.559: 0.3710% ( 1) 00:09:24.887 6632.559 - 6658.879: 0.3995% ( 4) 00:09:24.887 6658.879 - 6685.198: 0.4067% ( 1) 00:09:24.887 6685.198 - 6711.518: 0.4281% ( 3) 00:09:24.887 6711.518 - 6737.838: 0.4352% ( 1) 00:09:24.887 6737.838 - 6790.477: 0.4566% ( 3) 00:09:24.887 7843.264 - 7895.904: 0.5066% ( 7) 00:09:24.887 7895.904 - 7948.543: 0.6778% ( 24) 00:09:24.887 7948.543 - 8001.182: 1.1344% ( 64) 00:09:24.887 8001.182 - 8053.822: 1.9834% ( 119) 00:09:24.887 8053.822 - 8106.461: 3.1036% ( 157) 00:09:24.887 8106.461 - 8159.100: 4.7517% ( 231) 00:09:24.887 8159.100 - 8211.740: 6.9278% ( 305) 00:09:24.887 8211.740 - 8264.379: 9.3465% ( 339) 00:09:24.887 8264.379 - 8317.018: 12.1433% ( 392) 00:09:24.887 8317.018 - 8369.658: 15.4038% ( 457) 00:09:24.887 8369.658 - 8422.297: 19.1210% ( 521) 00:09:24.887 8422.297 - 8474.937: 22.7668% ( 511) 00:09:24.887 8474.937 - 8527.576: 26.6053% ( 538) 00:09:24.887 8527.576 - 8580.215: 30.7934% ( 587) 00:09:24.887 8580.215 - 8632.855: 35.1099% ( 605) 00:09:24.887 8632.855 - 8685.494: 39.5263% ( 619) 00:09:24.887 8685.494 - 8738.133: 43.9712% ( 623) 00:09:24.887 8738.133 - 8790.773: 48.5945% ( 648) 00:09:24.887 8790.773 - 8843.412: 53.1179% ( 634) 00:09:24.887 8843.412 - 8896.051: 57.8196% ( 659) 00:09:24.887 8896.051 - 8948.691: 62.4215% ( 645) 00:09:24.887 8948.691 - 9001.330: 66.6952% ( 599) 00:09:24.887 9001.330 - 9053.969: 70.7192% ( 564) 00:09:24.887 9053.969 - 9106.609: 74.3222% ( 505) 00:09:24.887 9106.609 - 9159.248: 77.5471% ( 452) 00:09:24.887 9159.248 - 9211.888: 80.3867% ( 398) 00:09:24.887 9211.888 - 9264.527: 82.9053% ( 353) 00:09:24.887 9264.527 - 9317.166: 84.9529% ( 287) 00:09:24.887 9317.166 - 9369.806: 86.8365% ( 264) 00:09:24.887 9369.806 - 9422.445: 88.3205% ( 208) 00:09:24.887 9422.445 - 9475.084: 89.6618% ( 188) 00:09:24.888 9475.084 - 9527.724: 90.6963% ( 145) 00:09:24.888 9527.724 - 9580.363: 91.6667% ( 136) 00:09:24.888 9580.363 - 9633.002: 92.4658% ( 112) 00:09:24.888 9633.002 - 9685.642: 93.1864% ( 101) 00:09:24.888 9685.642 - 9738.281: 93.7286% ( 76) 00:09:24.888 9738.281 - 9790.920: 94.2423% ( 72) 00:09:24.888 9790.920 - 9843.560: 94.5848% ( 48) 00:09:24.888 9843.560 - 9896.199: 94.9272% ( 48) 00:09:24.888 9896.199 - 9948.839: 95.1912% ( 37) 00:09:24.888 9948.839 - 10001.478: 95.3767% ( 26) 00:09:24.888 10001.478 - 
10054.117: 95.5836% ( 29) 00:09:24.888 10054.117 - 10106.757: 95.7834% ( 28) 00:09:24.888 10106.757 - 10159.396: 95.9047% ( 17) 00:09:24.888 10159.396 - 10212.035: 96.0616% ( 22) 00:09:24.888 10212.035 - 10264.675: 96.2115% ( 21) 00:09:24.888 10264.675 - 10317.314: 96.3042% ( 13) 00:09:24.888 10317.314 - 10369.953: 96.3613% ( 8) 00:09:24.888 10369.953 - 10422.593: 96.4255% ( 9) 00:09:24.888 10422.593 - 10475.232: 96.5040% ( 11) 00:09:24.888 10475.232 - 10527.871: 96.5254% ( 3) 00:09:24.888 10527.871 - 10580.511: 96.5468% ( 3) 00:09:24.888 10580.511 - 10633.150: 96.5753% ( 4) 00:09:24.888 10633.150 - 10685.790: 96.5967% ( 3) 00:09:24.888 10685.790 - 10738.429: 96.6110% ( 2) 00:09:24.888 10738.429 - 10791.068: 96.6610% ( 7) 00:09:24.888 10791.068 - 10843.708: 96.6895% ( 4) 00:09:24.888 10843.708 - 10896.347: 96.7608% ( 10) 00:09:24.888 10896.347 - 10948.986: 96.8179% ( 8) 00:09:24.888 10948.986 - 11001.626: 96.8893% ( 10) 00:09:24.888 11001.626 - 11054.265: 96.9535% ( 9) 00:09:24.888 11054.265 - 11106.904: 97.0534% ( 14) 00:09:24.888 11106.904 - 11159.544: 97.1176% ( 9) 00:09:24.888 11159.544 - 11212.183: 97.2103% ( 13) 00:09:24.888 11212.183 - 11264.822: 97.2959% ( 12) 00:09:24.888 11264.822 - 11317.462: 97.3530% ( 8) 00:09:24.888 11317.462 - 11370.101: 97.4172% ( 9) 00:09:24.888 11370.101 - 11422.741: 97.4886% ( 10) 00:09:24.888 11422.741 - 11475.380: 97.5528% ( 9) 00:09:24.888 11475.380 - 11528.019: 97.6313% ( 11) 00:09:24.888 11528.019 - 11580.659: 97.6955% ( 9) 00:09:24.888 11580.659 - 11633.298: 97.7811% ( 12) 00:09:24.888 11633.298 - 11685.937: 97.8311% ( 7) 00:09:24.888 11685.937 - 11738.577: 97.9024% ( 10) 00:09:24.888 11738.577 - 11791.216: 97.9737% ( 10) 00:09:24.888 11791.216 - 11843.855: 98.0166% ( 6) 00:09:24.888 11843.855 - 11896.495: 98.0594% ( 6) 00:09:24.888 11896.495 - 11949.134: 98.0808% ( 3) 00:09:24.888 11949.134 - 12001.773: 98.1022% ( 3) 00:09:24.888 12001.773 - 12054.413: 98.1236% ( 3) 00:09:24.888 12054.413 - 12107.052: 98.1450% ( 3) 00:09:24.888 12107.052 - 12159.692: 98.1664% ( 3) 00:09:24.888 12159.692 - 12212.331: 98.1735% ( 1) 00:09:24.888 17160.431 - 17265.709: 98.1807% ( 1) 00:09:24.888 17265.709 - 17370.988: 98.2235% ( 6) 00:09:24.888 17370.988 - 17476.267: 98.2591% ( 5) 00:09:24.888 17476.267 - 17581.545: 98.3091% ( 7) 00:09:24.888 17581.545 - 17686.824: 98.3519% ( 6) 00:09:24.888 17686.824 - 17792.103: 98.4161% ( 9) 00:09:24.888 17792.103 - 17897.382: 98.5017% ( 12) 00:09:24.888 17897.382 - 18002.660: 98.5873% ( 12) 00:09:24.888 18002.660 - 18107.939: 98.6658% ( 11) 00:09:24.888 18107.939 - 18213.218: 98.7514% ( 12) 00:09:24.888 18213.218 - 18318.496: 98.8370% ( 12) 00:09:24.888 18318.496 - 18423.775: 98.8799% ( 6) 00:09:24.888 18423.775 - 18529.054: 98.9227% ( 6) 00:09:24.888 18529.054 - 18634.333: 98.9512% ( 4) 00:09:24.888 18634.333 - 18739.611: 98.9869% ( 5) 00:09:24.888 18739.611 - 18844.890: 99.0297% ( 6) 00:09:24.888 18844.890 - 18950.169: 99.0654% ( 5) 00:09:24.888 18950.169 - 19055.447: 99.0868% ( 3) 00:09:24.888 28214.696 - 28425.253: 99.1724% ( 12) 00:09:24.888 28425.253 - 28635.810: 99.2651% ( 13) 00:09:24.888 28635.810 - 28846.368: 99.3436% ( 11) 00:09:24.888 28846.368 - 29056.925: 99.4221% ( 11) 00:09:24.888 29056.925 - 29267.483: 99.5148% ( 13) 00:09:24.888 29267.483 - 29478.040: 99.5434% ( 4) 00:09:24.888 34952.533 - 35163.091: 99.5505% ( 1) 00:09:24.888 35163.091 - 35373.648: 99.6433% ( 13) 00:09:24.888 35373.648 - 35584.206: 99.7146% ( 10) 00:09:24.888 35584.206 - 35794.763: 99.8002% ( 12) 00:09:24.888 35794.763 - 36005.320: 99.9001% ( 
14) 00:09:24.888 36005.320 - 36215.878: 99.9857% ( 12) 00:09:24.888 36215.878 - 36426.435: 100.0000% ( 2) 00:09:24.888 00:09:24.888 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:24.888 ============================================================================== 00:09:24.888 Range in us Cumulative IO count 00:09:24.888 5632.411 - 5658.731: 0.0143% ( 2) 00:09:24.888 5658.731 - 5685.051: 0.0214% ( 1) 00:09:24.888 5685.051 - 5711.370: 0.0357% ( 2) 00:09:24.888 5711.370 - 5737.690: 0.0499% ( 2) 00:09:24.888 5737.690 - 5764.010: 0.0785% ( 4) 00:09:24.888 5764.010 - 5790.329: 0.0928% ( 2) 00:09:24.888 5790.329 - 5816.649: 0.1142% ( 3) 00:09:24.888 5816.649 - 5842.969: 0.1284% ( 2) 00:09:24.888 5842.969 - 5869.288: 0.1498% ( 3) 00:09:24.888 5869.288 - 5895.608: 0.1641% ( 2) 00:09:24.888 5895.608 - 5921.928: 0.1784% ( 2) 00:09:24.888 5921.928 - 5948.247: 0.1998% ( 3) 00:09:24.888 5948.247 - 5974.567: 0.2140% ( 2) 00:09:24.888 5974.567 - 6000.887: 0.2283% ( 2) 00:09:24.888 6000.887 - 6027.206: 0.2426% ( 2) 00:09:24.888 6027.206 - 6053.526: 0.2640% ( 3) 00:09:24.888 6053.526 - 6079.846: 0.2854% ( 3) 00:09:24.888 6079.846 - 6106.165: 0.2997% ( 2) 00:09:24.888 6106.165 - 6132.485: 0.3139% ( 2) 00:09:24.888 6132.485 - 6158.805: 0.3282% ( 2) 00:09:24.888 6158.805 - 6185.124: 0.3425% ( 2) 00:09:24.888 6185.124 - 6211.444: 0.3639% ( 3) 00:09:24.888 6211.444 - 6237.764: 0.3781% ( 2) 00:09:24.888 6237.764 - 6264.084: 0.3924% ( 2) 00:09:24.888 6264.084 - 6290.403: 0.4138% ( 3) 00:09:24.888 6290.403 - 6316.723: 0.4281% ( 2) 00:09:24.888 6316.723 - 6343.043: 0.4424% ( 2) 00:09:24.888 6343.043 - 6369.362: 0.4566% ( 2) 00:09:24.888 7948.543 - 8001.182: 0.5280% ( 10) 00:09:24.888 8001.182 - 8053.822: 0.7491% ( 31) 00:09:24.888 8053.822 - 8106.461: 1.4840% ( 103) 00:09:24.888 8106.461 - 8159.100: 2.5328% ( 147) 00:09:24.888 8159.100 - 8211.740: 4.2737% ( 244) 00:09:24.888 8211.740 - 8264.379: 6.3428% ( 290) 00:09:24.888 8264.379 - 8317.018: 9.1182% ( 389) 00:09:24.888 8317.018 - 8369.658: 12.2217% ( 435) 00:09:24.888 8369.658 - 8422.297: 15.9033% ( 516) 00:09:24.888 8422.297 - 8474.937: 19.9344% ( 565) 00:09:24.888 8474.937 - 8527.576: 24.2509% ( 605) 00:09:24.888 8527.576 - 8580.215: 28.8813% ( 649) 00:09:24.888 8580.215 - 8632.855: 33.8114% ( 691) 00:09:24.888 8632.855 - 8685.494: 38.8984% ( 713) 00:09:24.888 8685.494 - 8738.133: 44.0497% ( 722) 00:09:24.888 8738.133 - 8790.773: 49.4364% ( 755) 00:09:24.888 8790.773 - 8843.412: 54.6090% ( 725) 00:09:24.888 8843.412 - 8896.051: 59.5248% ( 689) 00:09:24.888 8896.051 - 8948.691: 64.1695% ( 651) 00:09:24.888 8948.691 - 9001.330: 68.5574% ( 615) 00:09:24.888 9001.330 - 9053.969: 72.6027% ( 567) 00:09:24.888 9053.969 - 9106.609: 76.0702% ( 486) 00:09:24.888 9106.609 - 9159.248: 79.1381% ( 430) 00:09:24.888 9159.248 - 9211.888: 81.9421% ( 393) 00:09:24.888 9211.888 - 9264.527: 84.2038% ( 317) 00:09:24.888 9264.527 - 9317.166: 86.1872% ( 278) 00:09:24.888 9317.166 - 9369.806: 87.8781% ( 237) 00:09:24.888 9369.806 - 9422.445: 89.2266% ( 189) 00:09:24.888 9422.445 - 9475.084: 90.3539% ( 158) 00:09:24.888 9475.084 - 9527.724: 91.2600% ( 127) 00:09:24.888 9527.724 - 9580.363: 92.0519% ( 111) 00:09:24.888 9580.363 - 9633.002: 92.7440% ( 97) 00:09:24.888 9633.002 - 9685.642: 93.3719% ( 88) 00:09:24.888 9685.642 - 9738.281: 93.9141% ( 76) 00:09:24.888 9738.281 - 9790.920: 94.3636% ( 63) 00:09:24.888 9790.920 - 9843.560: 94.6561% ( 41) 00:09:24.888 9843.560 - 9896.199: 94.9415% ( 40) 00:09:24.888 9896.199 - 9948.839: 95.2055% ( 37) 00:09:24.888 9948.839 - 
10001.478: 95.4623% ( 36) 00:09:24.888 10001.478 - 10054.117: 95.6978% ( 33) 00:09:24.888 10054.117 - 10106.757: 95.8476% ( 21) 00:09:24.888 10106.757 - 10159.396: 96.0046% ( 22) 00:09:24.888 10159.396 - 10212.035: 96.1401% ( 19) 00:09:24.888 10212.035 - 10264.675: 96.2686% ( 18) 00:09:24.888 10264.675 - 10317.314: 96.3827% ( 16) 00:09:24.888 10317.314 - 10369.953: 96.4826% ( 14) 00:09:24.888 10369.953 - 10422.593: 96.5468% ( 9) 00:09:24.888 10422.593 - 10475.232: 96.5896% ( 6) 00:09:24.888 10475.232 - 10527.871: 96.6324% ( 6) 00:09:24.888 10527.871 - 10580.511: 96.6895% ( 8) 00:09:24.888 10580.511 - 10633.150: 96.7466% ( 8) 00:09:24.888 10633.150 - 10685.790: 96.8037% ( 8) 00:09:24.888 10685.790 - 10738.429: 96.8607% ( 8) 00:09:24.888 10738.429 - 10791.068: 96.9107% ( 7) 00:09:24.888 10791.068 - 10843.708: 96.9678% ( 8) 00:09:24.888 10843.708 - 10896.347: 97.0177% ( 7) 00:09:24.888 10896.347 - 10948.986: 97.0534% ( 5) 00:09:24.888 10948.986 - 11001.626: 97.0819% ( 4) 00:09:24.888 11001.626 - 11054.265: 97.0962% ( 2) 00:09:24.888 11054.265 - 11106.904: 97.1604% ( 9) 00:09:24.888 11106.904 - 11159.544: 97.2103% ( 7) 00:09:24.888 11159.544 - 11212.183: 97.2888% ( 11) 00:09:24.888 11212.183 - 11264.822: 97.3744% ( 12) 00:09:24.888 11264.822 - 11317.462: 97.4672% ( 13) 00:09:24.888 11317.462 - 11370.101: 97.5457% ( 11) 00:09:24.888 11370.101 - 11422.741: 97.6027% ( 8) 00:09:24.888 11422.741 - 11475.380: 97.6527% ( 7) 00:09:24.888 11475.380 - 11528.019: 97.7098% ( 8) 00:09:24.888 11528.019 - 11580.659: 97.7526% ( 6) 00:09:24.888 11580.659 - 11633.298: 97.8025% ( 7) 00:09:24.888 11633.298 - 11685.937: 97.8525% ( 7) 00:09:24.889 11685.937 - 11738.577: 97.9095% ( 8) 00:09:24.889 11738.577 - 11791.216: 97.9523% ( 6) 00:09:24.889 11791.216 - 11843.855: 98.0023% ( 7) 00:09:24.889 11843.855 - 11896.495: 98.0522% ( 7) 00:09:24.889 11896.495 - 11949.134: 98.1022% ( 7) 00:09:24.889 11949.134 - 12001.773: 98.1378% ( 5) 00:09:24.889 12001.773 - 12054.413: 98.1664% ( 4) 00:09:24.889 12054.413 - 12107.052: 98.1735% ( 1) 00:09:24.889 16634.037 - 16739.316: 98.1807% ( 1) 00:09:24.889 16739.316 - 16844.594: 98.2377% ( 8) 00:09:24.889 16844.594 - 16949.873: 98.2877% ( 7) 00:09:24.889 16949.873 - 17055.152: 98.3376% ( 7) 00:09:24.889 17055.152 - 17160.431: 98.3876% ( 7) 00:09:24.889 17160.431 - 17265.709: 98.4446% ( 8) 00:09:24.889 17265.709 - 17370.988: 98.5017% ( 8) 00:09:24.889 17370.988 - 17476.267: 98.6016% ( 14) 00:09:24.889 17476.267 - 17581.545: 98.7086% ( 15) 00:09:24.889 17581.545 - 17686.824: 98.7942% ( 12) 00:09:24.889 17686.824 - 17792.103: 98.8513% ( 8) 00:09:24.889 17792.103 - 17897.382: 98.8941% ( 6) 00:09:24.889 17897.382 - 18002.660: 98.9441% ( 7) 00:09:24.889 18002.660 - 18107.939: 98.9940% ( 7) 00:09:24.889 18107.939 - 18213.218: 99.0439% ( 7) 00:09:24.889 18213.218 - 18318.496: 99.0868% ( 6) 00:09:24.889 28004.138 - 28214.696: 99.1082% ( 3) 00:09:24.889 28214.696 - 28425.253: 99.1938% ( 12) 00:09:24.889 28425.253 - 28635.810: 99.2865% ( 13) 00:09:24.889 28635.810 - 28846.368: 99.3793% ( 13) 00:09:24.889 28846.368 - 29056.925: 99.4792% ( 14) 00:09:24.889 29056.925 - 29267.483: 99.5434% ( 9) 00:09:24.889 34741.976 - 34952.533: 99.6005% ( 8) 00:09:24.889 34952.533 - 35163.091: 99.6647% ( 9) 00:09:24.889 35163.091 - 35373.648: 99.7646% ( 14) 00:09:24.889 35373.648 - 35584.206: 99.8573% ( 13) 00:09:24.889 35584.206 - 35794.763: 99.9572% ( 14) 00:09:24.889 35794.763 - 36005.320: 100.0000% ( 6) 00:09:24.889 00:09:24.889 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:24.889 
============================================================================== 00:09:24.889 Range in us Cumulative IO count 00:09:24.889 4842.821 - 4869.141: 0.0143% ( 2) 00:09:24.889 4869.141 - 4895.460: 0.0285% ( 2) 00:09:24.889 4895.460 - 4921.780: 0.0499% ( 3) 00:09:24.889 4921.780 - 4948.100: 0.0642% ( 2) 00:09:24.889 4948.100 - 4974.419: 0.0856% ( 3) 00:09:24.889 4974.419 - 5000.739: 0.1070% ( 3) 00:09:24.889 5000.739 - 5027.059: 0.1213% ( 2) 00:09:24.889 5027.059 - 5053.378: 0.1427% ( 3) 00:09:24.889 5053.378 - 5079.698: 0.1570% ( 2) 00:09:24.889 5079.698 - 5106.018: 0.1784% ( 3) 00:09:24.889 5106.018 - 5132.337: 0.1926% ( 2) 00:09:24.889 5132.337 - 5158.657: 0.2140% ( 3) 00:09:24.889 5158.657 - 5184.977: 0.2283% ( 2) 00:09:24.889 5184.977 - 5211.296: 0.2426% ( 2) 00:09:24.889 5211.296 - 5237.616: 0.2568% ( 2) 00:09:24.889 5237.616 - 5263.936: 0.2783% ( 3) 00:09:24.889 5263.936 - 5290.255: 0.2925% ( 2) 00:09:24.889 5290.255 - 5316.575: 0.3139% ( 3) 00:09:24.889 5316.575 - 5342.895: 0.3282% ( 2) 00:09:24.889 5342.895 - 5369.214: 0.3496% ( 3) 00:09:24.889 5369.214 - 5395.534: 0.3567% ( 1) 00:09:24.889 5395.534 - 5421.854: 0.3781% ( 3) 00:09:24.889 5421.854 - 5448.173: 0.3995% ( 3) 00:09:24.889 5448.173 - 5474.493: 0.4138% ( 2) 00:09:24.889 5474.493 - 5500.813: 0.4281% ( 2) 00:09:24.889 5500.813 - 5527.133: 0.4495% ( 3) 00:09:24.889 5527.133 - 5553.452: 0.4566% ( 1) 00:09:24.889 7369.510 - 7422.149: 0.4780% ( 3) 00:09:24.889 7422.149 - 7474.789: 0.5137% ( 5) 00:09:24.889 7474.789 - 7527.428: 0.5494% ( 5) 00:09:24.889 7527.428 - 7580.067: 0.5779% ( 4) 00:09:24.889 7580.067 - 7632.707: 0.6136% ( 5) 00:09:24.889 7632.707 - 7685.346: 0.6493% ( 5) 00:09:24.889 7685.346 - 7737.986: 0.6849% ( 5) 00:09:24.889 7737.986 - 7790.625: 0.7135% ( 4) 00:09:24.889 7790.625 - 7843.264: 0.7491% ( 5) 00:09:24.889 7843.264 - 7895.904: 0.7848% ( 5) 00:09:24.889 7895.904 - 7948.543: 0.8205% ( 5) 00:09:24.889 7948.543 - 8001.182: 0.8633% ( 6) 00:09:24.889 8001.182 - 8053.822: 1.1701% ( 43) 00:09:24.889 8053.822 - 8106.461: 1.9264% ( 106) 00:09:24.889 8106.461 - 8159.100: 3.0893% ( 163) 00:09:24.889 8159.100 - 8211.740: 4.6946% ( 225) 00:09:24.889 8211.740 - 8264.379: 6.7566% ( 289) 00:09:24.889 8264.379 - 8317.018: 9.4321% ( 375) 00:09:24.889 8317.018 - 8369.658: 12.5642% ( 439) 00:09:24.889 8369.658 - 8422.297: 16.2529% ( 517) 00:09:24.889 8422.297 - 8474.937: 20.3410% ( 573) 00:09:24.889 8474.937 - 8527.576: 24.5862% ( 595) 00:09:24.889 8527.576 - 8580.215: 29.3022% ( 661) 00:09:24.889 8580.215 - 8632.855: 34.1396% ( 678) 00:09:24.889 8632.855 - 8685.494: 39.2266% ( 713) 00:09:24.889 8685.494 - 8738.133: 44.4706% ( 735) 00:09:24.889 8738.133 - 8790.773: 49.7931% ( 746) 00:09:24.889 8790.773 - 8843.412: 55.0300% ( 734) 00:09:24.889 8843.412 - 8896.051: 59.9458% ( 689) 00:09:24.889 8896.051 - 8948.691: 64.7189% ( 669) 00:09:24.889 8948.691 - 9001.330: 69.0354% ( 605) 00:09:24.889 9001.330 - 9053.969: 73.0950% ( 569) 00:09:24.889 9053.969 - 9106.609: 76.5482% ( 484) 00:09:24.889 9106.609 - 9159.248: 79.7303% ( 446) 00:09:24.889 9159.248 - 9211.888: 82.4914% ( 387) 00:09:24.889 9211.888 - 9264.527: 84.7603% ( 318) 00:09:24.889 9264.527 - 9317.166: 86.6795% ( 269) 00:09:24.889 9317.166 - 9369.806: 88.3134% ( 229) 00:09:24.889 9369.806 - 9422.445: 89.5691% ( 176) 00:09:24.889 9422.445 - 9475.084: 90.5965% ( 144) 00:09:24.889 9475.084 - 9527.724: 91.4241% ( 116) 00:09:24.889 9527.724 - 9580.363: 92.1376% ( 100) 00:09:24.889 9580.363 - 9633.002: 92.7725% ( 89) 00:09:24.889 9633.002 - 9685.642: 93.3790% ( 85) 
00:09:24.889 9685.642 - 9738.281: 93.8499% ( 66) 00:09:24.889 9738.281 - 9790.920: 94.2851% ( 61) 00:09:24.889 9790.920 - 9843.560: 94.5919% ( 43) 00:09:24.889 9843.560 - 9896.199: 94.8916% ( 42) 00:09:24.889 9896.199 - 9948.839: 95.1555% ( 37) 00:09:24.889 9948.839 - 10001.478: 95.3624% ( 29) 00:09:24.889 10001.478 - 10054.117: 95.5265% ( 23) 00:09:24.889 10054.117 - 10106.757: 95.6978% ( 24) 00:09:24.889 10106.757 - 10159.396: 95.8619% ( 23) 00:09:24.889 10159.396 - 10212.035: 96.0260% ( 23) 00:09:24.889 10212.035 - 10264.675: 96.1544% ( 18) 00:09:24.889 10264.675 - 10317.314: 96.2828% ( 18) 00:09:24.889 10317.314 - 10369.953: 96.4255% ( 20) 00:09:24.889 10369.953 - 10422.593: 96.5611% ( 19) 00:09:24.889 10422.593 - 10475.232: 96.6966% ( 19) 00:09:24.889 10475.232 - 10527.871: 96.8108% ( 16) 00:09:24.889 10527.871 - 10580.511: 96.9107% ( 14) 00:09:24.889 10580.511 - 10633.150: 96.9963% ( 12) 00:09:24.889 10633.150 - 10685.790: 97.0676% ( 10) 00:09:24.889 10685.790 - 10738.429: 97.1247% ( 8) 00:09:24.889 10738.429 - 10791.068: 97.1889% ( 9) 00:09:24.889 10791.068 - 10843.708: 97.2317% ( 6) 00:09:24.889 10843.708 - 10896.347: 97.2959% ( 9) 00:09:24.889 10896.347 - 10948.986: 97.3530% ( 8) 00:09:24.889 10948.986 - 11001.626: 97.3816% ( 4) 00:09:24.889 11001.626 - 11054.265: 97.4172% ( 5) 00:09:24.889 11054.265 - 11106.904: 97.4386% ( 3) 00:09:24.889 11106.904 - 11159.544: 97.4672% ( 4) 00:09:24.889 11159.544 - 11212.183: 97.4814% ( 2) 00:09:24.889 11212.183 - 11264.822: 97.5100% ( 4) 00:09:24.889 11264.822 - 11317.462: 97.5314% ( 3) 00:09:24.889 11317.462 - 11370.101: 97.5599% ( 4) 00:09:24.889 11370.101 - 11422.741: 97.5813% ( 3) 00:09:24.889 11422.741 - 11475.380: 97.6313% ( 7) 00:09:24.889 11475.380 - 11528.019: 97.6812% ( 7) 00:09:24.889 11528.019 - 11580.659: 97.7240% ( 6) 00:09:24.889 11580.659 - 11633.298: 97.7882% ( 9) 00:09:24.889 11633.298 - 11685.937: 97.8382% ( 7) 00:09:24.889 11685.937 - 11738.577: 97.8667% ( 4) 00:09:24.889 11738.577 - 11791.216: 97.8953% ( 4) 00:09:24.889 11791.216 - 11843.855: 97.9167% ( 3) 00:09:24.889 11843.855 - 11896.495: 97.9381% ( 3) 00:09:24.889 11896.495 - 11949.134: 97.9666% ( 4) 00:09:24.889 11949.134 - 12001.773: 97.9880% ( 3) 00:09:24.889 12001.773 - 12054.413: 98.0166% ( 4) 00:09:24.889 12054.413 - 12107.052: 98.0451% ( 4) 00:09:24.889 12107.052 - 12159.692: 98.0736% ( 4) 00:09:24.889 12159.692 - 12212.331: 98.0950% ( 3) 00:09:24.889 12212.331 - 12264.970: 98.1164% ( 3) 00:09:24.889 12264.970 - 12317.610: 98.1378% ( 3) 00:09:24.889 12317.610 - 12370.249: 98.1664% ( 4) 00:09:24.889 12370.249 - 12422.888: 98.1735% ( 1) 00:09:24.889 16949.873 - 17055.152: 98.2092% ( 5) 00:09:24.889 17055.152 - 17160.431: 98.2591% ( 7) 00:09:24.889 17160.431 - 17265.709: 98.3091% ( 7) 00:09:24.889 17265.709 - 17370.988: 98.3590% ( 7) 00:09:24.889 17370.988 - 17476.267: 98.4090% ( 7) 00:09:24.889 17476.267 - 17581.545: 98.4874% ( 11) 00:09:24.889 17581.545 - 17686.824: 98.5873% ( 14) 00:09:24.889 17686.824 - 17792.103: 98.6872% ( 14) 00:09:24.889 17792.103 - 17897.382: 98.7942% ( 15) 00:09:24.889 17897.382 - 18002.660: 98.8584% ( 9) 00:09:24.889 18002.660 - 18107.939: 98.9084% ( 7) 00:09:24.889 18107.939 - 18213.218: 98.9583% ( 7) 00:09:24.889 18213.218 - 18318.496: 99.0154% ( 8) 00:09:24.889 18318.496 - 18423.775: 99.0654% ( 7) 00:09:24.889 18423.775 - 18529.054: 99.0868% ( 3) 00:09:24.889 28635.810 - 28846.368: 99.1795% ( 13) 00:09:24.889 28846.368 - 29056.925: 99.2723% ( 13) 00:09:24.889 29056.925 - 29267.483: 99.3721% ( 14) 00:09:24.889 29267.483 - 29478.040: 
99.4720% ( 14) 00:09:24.889 29478.040 - 29688.598: 99.5434% ( 10) 00:09:24.890 34952.533 - 35163.091: 99.6147% ( 10) 00:09:24.890 35163.091 - 35373.648: 99.7217% ( 15) 00:09:24.890 35373.648 - 35584.206: 99.8145% ( 13) 00:09:24.890 35584.206 - 35794.763: 99.9072% ( 13) 00:09:24.890 35794.763 - 36005.320: 100.0000% ( 13) 00:09:24.890 00:09:24.890 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:24.890 ============================================================================== 00:09:24.890 Range in us Cumulative IO count 00:09:24.890 4500.665 - 4526.985: 0.0071% ( 1) 00:09:24.890 4526.985 - 4553.304: 0.0214% ( 2) 00:09:24.890 4553.304 - 4579.624: 0.0357% ( 2) 00:09:24.890 4579.624 - 4605.944: 0.0499% ( 2) 00:09:24.890 4605.944 - 4632.263: 0.0642% ( 2) 00:09:24.890 4632.263 - 4658.583: 0.0928% ( 4) 00:09:24.890 4658.583 - 4684.903: 0.1070% ( 2) 00:09:24.890 4684.903 - 4711.222: 0.1284% ( 3) 00:09:24.890 4711.222 - 4737.542: 0.1427% ( 2) 00:09:24.890 4737.542 - 4763.862: 0.1641% ( 3) 00:09:24.890 4763.862 - 4790.182: 0.1712% ( 1) 00:09:24.890 4790.182 - 4816.501: 0.1926% ( 3) 00:09:24.890 4816.501 - 4842.821: 0.2069% ( 2) 00:09:24.890 4842.821 - 4869.141: 0.2212% ( 2) 00:09:24.890 4869.141 - 4895.460: 0.2426% ( 3) 00:09:24.890 4895.460 - 4921.780: 0.2568% ( 2) 00:09:24.890 4921.780 - 4948.100: 0.2783% ( 3) 00:09:24.890 4948.100 - 4974.419: 0.2925% ( 2) 00:09:24.890 4974.419 - 5000.739: 0.3139% ( 3) 00:09:24.890 5000.739 - 5027.059: 0.3282% ( 2) 00:09:24.890 5027.059 - 5053.378: 0.3425% ( 2) 00:09:24.890 5053.378 - 5079.698: 0.3639% ( 3) 00:09:24.890 5079.698 - 5106.018: 0.3781% ( 2) 00:09:24.890 5106.018 - 5132.337: 0.3995% ( 3) 00:09:24.890 5132.337 - 5158.657: 0.4209% ( 3) 00:09:24.890 5158.657 - 5184.977: 0.4352% ( 2) 00:09:24.890 5184.977 - 5211.296: 0.4566% ( 3) 00:09:24.890 7001.035 - 7053.674: 0.4780% ( 3) 00:09:24.890 7053.674 - 7106.313: 0.5137% ( 5) 00:09:24.890 7106.313 - 7158.953: 0.5494% ( 5) 00:09:24.890 7158.953 - 7211.592: 0.5922% ( 6) 00:09:24.890 7211.592 - 7264.231: 0.6207% ( 4) 00:09:24.890 7264.231 - 7316.871: 0.6564% ( 5) 00:09:24.890 7316.871 - 7369.510: 0.6849% ( 4) 00:09:24.890 7369.510 - 7422.149: 0.7135% ( 4) 00:09:24.890 7422.149 - 7474.789: 0.7420% ( 4) 00:09:24.890 7474.789 - 7527.428: 0.7705% ( 4) 00:09:24.890 7527.428 - 7580.067: 0.8062% ( 5) 00:09:24.890 7580.067 - 7632.707: 0.8348% ( 4) 00:09:24.890 7632.707 - 7685.346: 0.8704% ( 5) 00:09:24.890 7685.346 - 7737.986: 0.9061% ( 5) 00:09:24.890 7737.986 - 7790.625: 0.9132% ( 1) 00:09:24.890 7948.543 - 8001.182: 0.9346% ( 3) 00:09:24.890 8001.182 - 8053.822: 1.1915% ( 36) 00:09:24.890 8053.822 - 8106.461: 1.8622% ( 94) 00:09:24.890 8106.461 - 8159.100: 2.8896% ( 144) 00:09:24.890 8159.100 - 8211.740: 4.5805% ( 237) 00:09:24.890 8211.740 - 8264.379: 6.8564% ( 319) 00:09:24.890 8264.379 - 8317.018: 9.4178% ( 359) 00:09:24.890 8317.018 - 8369.658: 12.4358% ( 423) 00:09:24.890 8369.658 - 8422.297: 16.1316% ( 518) 00:09:24.890 8422.297 - 8474.937: 20.2768% ( 581) 00:09:24.890 8474.937 - 8527.576: 24.6647% ( 615) 00:09:24.890 8527.576 - 8580.215: 29.1881% ( 634) 00:09:24.890 8580.215 - 8632.855: 34.2680% ( 712) 00:09:24.890 8632.855 - 8685.494: 39.4763% ( 730) 00:09:24.890 8685.494 - 8738.133: 44.7346% ( 737) 00:09:24.890 8738.133 - 8790.773: 50.0428% ( 744) 00:09:24.890 8790.773 - 8843.412: 55.2155% ( 725) 00:09:24.890 8843.412 - 8896.051: 60.1027% ( 685) 00:09:24.890 8896.051 - 8948.691: 64.9401% ( 678) 00:09:24.890 8948.691 - 9001.330: 69.3065% ( 612) 00:09:24.890 9001.330 - 9053.969: 73.2306% ( 
550) 00:09:24.890 9053.969 - 9106.609: 76.8479% ( 507) 00:09:24.890 9106.609 - 9159.248: 79.9515% ( 435) 00:09:24.890 9159.248 - 9211.888: 82.6769% ( 382) 00:09:24.890 9211.888 - 9264.527: 84.9743% ( 322) 00:09:24.890 9264.527 - 9317.166: 86.8793% ( 267) 00:09:24.890 9317.166 - 9369.806: 88.4346% ( 218) 00:09:24.890 9369.806 - 9422.445: 89.7546% ( 185) 00:09:24.890 9422.445 - 9475.084: 90.7392% ( 138) 00:09:24.890 9475.084 - 9527.724: 91.5382% ( 112) 00:09:24.890 9527.724 - 9580.363: 92.2089% ( 94) 00:09:24.890 9580.363 - 9633.002: 92.8510% ( 90) 00:09:24.890 9633.002 - 9685.642: 93.4432% ( 83) 00:09:24.890 9685.642 - 9738.281: 93.9712% ( 74) 00:09:24.890 9738.281 - 9790.920: 94.3779% ( 57) 00:09:24.890 9790.920 - 9843.560: 94.7132% ( 47) 00:09:24.890 9843.560 - 9896.199: 95.0128% ( 42) 00:09:24.890 9896.199 - 9948.839: 95.2197% ( 29) 00:09:24.890 9948.839 - 10001.478: 95.4124% ( 27) 00:09:24.890 10001.478 - 10054.117: 95.5979% ( 26) 00:09:24.890 10054.117 - 10106.757: 95.7620% ( 23) 00:09:24.890 10106.757 - 10159.396: 95.8975% ( 19) 00:09:24.890 10159.396 - 10212.035: 96.0046% ( 15) 00:09:24.890 10212.035 - 10264.675: 96.1187% ( 16) 00:09:24.890 10264.675 - 10317.314: 96.2400% ( 17) 00:09:24.890 10317.314 - 10369.953: 96.4398% ( 28) 00:09:24.890 10369.953 - 10422.593: 96.5825% ( 20) 00:09:24.890 10422.593 - 10475.232: 96.6824% ( 14) 00:09:24.890 10475.232 - 10527.871: 96.7822% ( 14) 00:09:24.890 10527.871 - 10580.511: 96.9178% ( 19) 00:09:24.890 10580.511 - 10633.150: 96.9892% ( 10) 00:09:24.890 10633.150 - 10685.790: 97.0676% ( 11) 00:09:24.890 10685.790 - 10738.429: 97.1533% ( 12) 00:09:24.890 10738.429 - 10791.068: 97.2246% ( 10) 00:09:24.890 10791.068 - 10843.708: 97.2817% ( 8) 00:09:24.890 10843.708 - 10896.347: 97.3316% ( 7) 00:09:24.890 10896.347 - 10948.986: 97.3887% ( 8) 00:09:24.890 10948.986 - 11001.626: 97.4458% ( 8) 00:09:24.890 11001.626 - 11054.265: 97.4957% ( 7) 00:09:24.890 11054.265 - 11106.904: 97.5171% ( 3) 00:09:24.890 11106.904 - 11159.544: 97.5457% ( 4) 00:09:24.890 11159.544 - 11212.183: 97.5742% ( 4) 00:09:24.890 11212.183 - 11264.822: 97.5956% ( 3) 00:09:24.890 11264.822 - 11317.462: 97.6241% ( 4) 00:09:24.890 11317.462 - 11370.101: 97.6527% ( 4) 00:09:24.890 11370.101 - 11422.741: 97.6741% ( 3) 00:09:24.890 11422.741 - 11475.380: 97.7026% ( 4) 00:09:24.890 11475.380 - 11528.019: 97.7169% ( 2) 00:09:24.890 11528.019 - 11580.659: 97.7383% ( 3) 00:09:24.890 11580.659 - 11633.298: 97.7526% ( 2) 00:09:24.890 11633.298 - 11685.937: 97.7811% ( 4) 00:09:24.890 11685.937 - 11738.577: 97.8096% ( 4) 00:09:24.890 11738.577 - 11791.216: 97.8311% ( 3) 00:09:24.890 11791.216 - 11843.855: 97.8596% ( 4) 00:09:24.890 11843.855 - 11896.495: 97.8881% ( 4) 00:09:24.890 11896.495 - 11949.134: 97.9167% ( 4) 00:09:24.890 11949.134 - 12001.773: 97.9381% ( 3) 00:09:24.890 12001.773 - 12054.413: 97.9666% ( 4) 00:09:24.890 12054.413 - 12107.052: 97.9951% ( 4) 00:09:24.890 12107.052 - 12159.692: 98.0166% ( 3) 00:09:24.890 12159.692 - 12212.331: 98.0451% ( 4) 00:09:24.890 12212.331 - 12264.970: 98.0736% ( 4) 00:09:24.890 12264.970 - 12317.610: 98.0950% ( 3) 00:09:24.890 12317.610 - 12370.249: 98.1236% ( 4) 00:09:24.890 12370.249 - 12422.888: 98.1450% ( 3) 00:09:24.890 12422.888 - 12475.528: 98.1735% ( 4) 00:09:24.890 17055.152 - 17160.431: 98.2163% ( 6) 00:09:24.890 17160.431 - 17265.709: 98.2734% ( 8) 00:09:24.890 17265.709 - 17370.988: 98.3233% ( 7) 00:09:24.890 17370.988 - 17476.267: 98.3733% ( 7) 00:09:24.890 17476.267 - 17581.545: 98.4304% ( 8) 00:09:24.890 17581.545 - 17686.824: 
98.5303% ( 14) 00:09:24.890 17686.824 - 17792.103: 98.6301% ( 14) 00:09:24.890 17792.103 - 17897.382: 98.7372% ( 15) 00:09:24.890 17897.382 - 18002.660: 98.8370% ( 14) 00:09:24.890 18002.660 - 18107.939: 98.8941% ( 8) 00:09:24.890 18107.939 - 18213.218: 98.9441% ( 7) 00:09:24.890 18213.218 - 18318.496: 99.0011% ( 8) 00:09:24.890 18318.496 - 18423.775: 99.0439% ( 6) 00:09:24.890 18423.775 - 18529.054: 99.0868% ( 6) 00:09:24.890 28214.696 - 28425.253: 99.1082% ( 3) 00:09:24.890 28425.253 - 28635.810: 99.1938% ( 12) 00:09:24.890 28635.810 - 28846.368: 99.2865% ( 13) 00:09:24.890 28846.368 - 29056.925: 99.3864% ( 14) 00:09:24.890 29056.925 - 29267.483: 99.4863% ( 14) 00:09:24.890 29267.483 - 29478.040: 99.5434% ( 8) 00:09:24.890 34531.418 - 34741.976: 99.6361% ( 13) 00:09:24.890 34741.976 - 34952.533: 99.7360% ( 14) 00:09:24.890 34952.533 - 35163.091: 99.8288% ( 13) 00:09:24.890 35163.091 - 35373.648: 99.9287% ( 14) 00:09:24.890 35373.648 - 35584.206: 100.0000% ( 10) 00:09:24.890 00:09:24.890 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:24.890 ============================================================================== 00:09:24.891 Range in us Cumulative IO count 00:09:24.891 4211.149 - 4237.468: 0.0214% ( 3) 00:09:24.891 4237.468 - 4263.788: 0.0428% ( 3) 00:09:24.891 4263.788 - 4290.108: 0.0571% ( 2) 00:09:24.891 4290.108 - 4316.427: 0.0713% ( 2) 00:09:24.891 4316.427 - 4342.747: 0.0856% ( 2) 00:09:24.891 4342.747 - 4369.067: 0.1070% ( 3) 00:09:24.891 4369.067 - 4395.386: 0.1213% ( 2) 00:09:24.891 4395.386 - 4421.706: 0.1427% ( 3) 00:09:24.891 4421.706 - 4448.026: 0.1570% ( 2) 00:09:24.891 4448.026 - 4474.345: 0.1784% ( 3) 00:09:24.891 4474.345 - 4500.665: 0.1926% ( 2) 00:09:24.891 4500.665 - 4526.985: 0.2069% ( 2) 00:09:24.891 4526.985 - 4553.304: 0.2283% ( 3) 00:09:24.891 4553.304 - 4579.624: 0.2354% ( 1) 00:09:24.891 4579.624 - 4605.944: 0.2568% ( 3) 00:09:24.891 4605.944 - 4632.263: 0.2711% ( 2) 00:09:24.891 4632.263 - 4658.583: 0.2925% ( 3) 00:09:24.891 4658.583 - 4684.903: 0.3068% ( 2) 00:09:24.891 4684.903 - 4711.222: 0.3282% ( 3) 00:09:24.891 4711.222 - 4737.542: 0.3425% ( 2) 00:09:24.891 4737.542 - 4763.862: 0.3639% ( 3) 00:09:24.891 4763.862 - 4790.182: 0.3781% ( 2) 00:09:24.891 4790.182 - 4816.501: 0.3995% ( 3) 00:09:24.891 4816.501 - 4842.821: 0.4138% ( 2) 00:09:24.891 4842.821 - 4869.141: 0.4281% ( 2) 00:09:24.891 4869.141 - 4895.460: 0.4495% ( 3) 00:09:24.891 4895.460 - 4921.780: 0.4566% ( 1) 00:09:24.891 6737.838 - 6790.477: 0.5137% ( 8) 00:09:24.891 6790.477 - 6843.116: 0.5351% ( 3) 00:09:24.891 6843.116 - 6895.756: 0.5708% ( 5) 00:09:24.891 6895.756 - 6948.395: 0.5993% ( 4) 00:09:24.891 6948.395 - 7001.035: 0.6350% ( 5) 00:09:24.891 7001.035 - 7053.674: 0.6635% ( 4) 00:09:24.891 7053.674 - 7106.313: 0.6992% ( 5) 00:09:24.891 7106.313 - 7158.953: 0.7349% ( 5) 00:09:24.891 7158.953 - 7211.592: 0.7705% ( 5) 00:09:24.891 7211.592 - 7264.231: 0.7920% ( 3) 00:09:24.891 7264.231 - 7316.871: 0.8205% ( 4) 00:09:24.891 7316.871 - 7369.510: 0.8562% ( 5) 00:09:24.891 7369.510 - 7422.149: 0.8918% ( 5) 00:09:24.891 7422.149 - 7474.789: 0.9132% ( 3) 00:09:24.891 7948.543 - 8001.182: 0.9632% ( 7) 00:09:24.891 8001.182 - 8053.822: 1.3057% ( 48) 00:09:24.891 8053.822 - 8106.461: 2.0976% ( 111) 00:09:24.891 8106.461 - 8159.100: 3.1107% ( 142) 00:09:24.891 8159.100 - 8211.740: 4.8516% ( 244) 00:09:24.891 8211.740 - 8264.379: 7.0348% ( 306) 00:09:24.891 8264.379 - 8317.018: 9.6961% ( 373) 00:09:24.891 8317.018 - 8369.658: 12.9138% ( 451) 00:09:24.891 8369.658 - 8422.297: 
16.6239% ( 520) 00:09:24.891 8422.297 - 8474.937: 20.7263% ( 575) 00:09:24.891 8474.937 - 8527.576: 25.0071% ( 600) 00:09:24.891 8527.576 - 8580.215: 29.6019% ( 644) 00:09:24.891 8580.215 - 8632.855: 34.4678% ( 682) 00:09:24.891 8632.855 - 8685.494: 39.4977% ( 705) 00:09:24.891 8685.494 - 8738.133: 44.6561% ( 723) 00:09:24.891 8738.133 - 8790.773: 49.9501% ( 742) 00:09:24.891 8790.773 - 8843.412: 55.0300% ( 712) 00:09:24.891 8843.412 - 8896.051: 59.9030% ( 683) 00:09:24.891 8896.051 - 8948.691: 64.6475% ( 665) 00:09:24.891 8948.691 - 9001.330: 69.1210% ( 627) 00:09:24.891 9001.330 - 9053.969: 73.1307% ( 562) 00:09:24.891 9053.969 - 9106.609: 76.8193% ( 517) 00:09:24.891 9106.609 - 9159.248: 79.8659% ( 427) 00:09:24.891 9159.248 - 9211.888: 82.5200% ( 372) 00:09:24.891 9211.888 - 9264.527: 84.7959% ( 319) 00:09:24.891 9264.527 - 9317.166: 86.7080% ( 268) 00:09:24.891 9317.166 - 9369.806: 88.3704% ( 233) 00:09:24.891 9369.806 - 9422.445: 89.6904% ( 185) 00:09:24.891 9422.445 - 9475.084: 90.8034% ( 156) 00:09:24.891 9475.084 - 9527.724: 91.6453% ( 118) 00:09:24.891 9527.724 - 9580.363: 92.3017% ( 92) 00:09:24.891 9580.363 - 9633.002: 92.8938% ( 83) 00:09:24.891 9633.002 - 9685.642: 93.5360% ( 90) 00:09:24.891 9685.642 - 9738.281: 94.0711% ( 75) 00:09:24.891 9738.281 - 9790.920: 94.5277% ( 64) 00:09:24.891 9790.920 - 9843.560: 94.9201% ( 55) 00:09:24.891 9843.560 - 9896.199: 95.2554% ( 47) 00:09:24.891 9896.199 - 9948.839: 95.4766% ( 31) 00:09:24.891 9948.839 - 10001.478: 95.6478% ( 24) 00:09:24.891 10001.478 - 10054.117: 95.8048% ( 22) 00:09:24.891 10054.117 - 10106.757: 95.9404% ( 19) 00:09:24.891 10106.757 - 10159.396: 96.0402% ( 14) 00:09:24.891 10159.396 - 10212.035: 96.1473% ( 15) 00:09:24.891 10212.035 - 10264.675: 96.2400% ( 13) 00:09:24.891 10264.675 - 10317.314: 96.3399% ( 14) 00:09:24.891 10317.314 - 10369.953: 96.4469% ( 15) 00:09:24.891 10369.953 - 10422.593: 96.5753% ( 18) 00:09:24.891 10422.593 - 10475.232: 96.6752% ( 14) 00:09:24.891 10475.232 - 10527.871: 96.7894% ( 16) 00:09:24.891 10527.871 - 10580.511: 96.8964% ( 15) 00:09:24.891 10580.511 - 10633.150: 96.9820% ( 12) 00:09:24.891 10633.150 - 10685.790: 97.0462% ( 9) 00:09:24.891 10685.790 - 10738.429: 97.0962% ( 7) 00:09:24.891 10738.429 - 10791.068: 97.1533% ( 8) 00:09:24.891 10791.068 - 10843.708: 97.2103% ( 8) 00:09:24.891 10843.708 - 10896.347: 97.2674% ( 8) 00:09:24.891 10896.347 - 10948.986: 97.3174% ( 7) 00:09:24.891 10948.986 - 11001.626: 97.3744% ( 8) 00:09:24.891 11001.626 - 11054.265: 97.4315% ( 8) 00:09:24.891 11054.265 - 11106.904: 97.4743% ( 6) 00:09:24.891 11106.904 - 11159.544: 97.5314% ( 8) 00:09:24.891 11159.544 - 11212.183: 97.5885% ( 8) 00:09:24.891 11212.183 - 11264.822: 97.6241% ( 5) 00:09:24.891 11264.822 - 11317.462: 97.6455% ( 3) 00:09:24.891 11317.462 - 11370.101: 97.6812% ( 5) 00:09:24.891 11370.101 - 11422.741: 97.7383% ( 8) 00:09:24.891 11422.741 - 11475.380: 97.7811% ( 6) 00:09:24.891 11475.380 - 11528.019: 97.8025% ( 3) 00:09:24.891 11528.019 - 11580.659: 97.8239% ( 3) 00:09:24.891 11580.659 - 11633.298: 97.8525% ( 4) 00:09:24.891 11633.298 - 11685.937: 97.8810% ( 4) 00:09:24.891 11685.937 - 11738.577: 97.9024% ( 3) 00:09:24.891 11738.577 - 11791.216: 97.9309% ( 4) 00:09:24.891 11791.216 - 11843.855: 97.9595% ( 4) 00:09:24.891 11843.855 - 11896.495: 97.9809% ( 3) 00:09:24.891 11896.495 - 11949.134: 98.0094% ( 4) 00:09:24.891 11949.134 - 12001.773: 98.0380% ( 4) 00:09:24.891 12001.773 - 12054.413: 98.0594% ( 3) 00:09:24.891 12054.413 - 12107.052: 98.0879% ( 4) 00:09:24.891 12107.052 - 
12159.692: 98.1093% ( 3) 00:09:24.891 12159.692 - 12212.331: 98.1378% ( 4) 00:09:24.891 12212.331 - 12264.970: 98.1592% ( 3) 00:09:24.891 12264.970 - 12317.610: 98.1735% ( 2) 00:09:24.891 17160.431 - 17265.709: 98.2163% ( 6) 00:09:24.891 17265.709 - 17370.988: 98.2591% ( 6) 00:09:24.891 17370.988 - 17476.267: 98.3091% ( 7) 00:09:24.891 17476.267 - 17581.545: 98.3590% ( 7) 00:09:24.891 17581.545 - 17686.824: 98.4232% ( 9) 00:09:24.891 17686.824 - 17792.103: 98.5160% ( 13) 00:09:24.891 17792.103 - 17897.382: 98.6230% ( 15) 00:09:24.891 17897.382 - 18002.660: 98.7229% ( 14) 00:09:24.891 18002.660 - 18107.939: 98.8156% ( 13) 00:09:24.891 18107.939 - 18213.218: 98.9013% ( 12) 00:09:24.891 18213.218 - 18318.496: 98.9512% ( 7) 00:09:24.891 18318.496 - 18423.775: 99.0083% ( 8) 00:09:24.891 18423.775 - 18529.054: 99.0582% ( 7) 00:09:24.891 18529.054 - 18634.333: 99.0868% ( 4) 00:09:24.891 28004.138 - 28214.696: 99.1510% ( 9) 00:09:24.891 28214.696 - 28425.253: 99.2366% ( 12) 00:09:24.891 28425.253 - 28635.810: 99.3365% ( 14) 00:09:24.891 28635.810 - 28846.368: 99.4364% ( 14) 00:09:24.891 28846.368 - 29056.925: 99.5362% ( 14) 00:09:24.891 29056.925 - 29267.483: 99.5434% ( 1) 00:09:24.891 33899.746 - 34110.304: 99.5505% ( 1) 00:09:24.891 34110.304 - 34320.861: 99.6361% ( 12) 00:09:24.891 34320.861 - 34531.418: 99.7360% ( 14) 00:09:24.891 34531.418 - 34741.976: 99.8288% ( 13) 00:09:24.891 34741.976 - 34952.533: 99.9215% ( 13) 00:09:24.891 34952.533 - 35163.091: 100.0000% ( 11) 00:09:24.891 00:09:24.891 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:24.891 ============================================================================== 00:09:24.891 Range in us Cumulative IO count 00:09:24.891 3868.993 - 3895.312: 0.0497% ( 7) 00:09:24.891 3895.312 - 3921.632: 0.0568% ( 1) 00:09:24.891 3947.952 - 3974.271: 0.0781% ( 3) 00:09:24.891 3974.271 - 4000.591: 0.0994% ( 3) 00:09:24.891 4000.591 - 4026.911: 0.1207% ( 3) 00:09:24.891 4026.911 - 4053.231: 0.1349% ( 2) 00:09:24.891 4053.231 - 4079.550: 0.1634% ( 4) 00:09:24.891 4079.550 - 4105.870: 0.1776% ( 2) 00:09:24.891 4105.870 - 4132.190: 0.1989% ( 3) 00:09:24.891 4132.190 - 4158.509: 0.2131% ( 2) 00:09:24.891 4158.509 - 4184.829: 0.2344% ( 3) 00:09:24.891 4184.829 - 4211.149: 0.2486% ( 2) 00:09:24.891 4211.149 - 4237.468: 0.2628% ( 2) 00:09:24.891 4237.468 - 4263.788: 0.2770% ( 2) 00:09:24.891 4263.788 - 4290.108: 0.2983% ( 3) 00:09:24.891 4290.108 - 4316.427: 0.3125% ( 2) 00:09:24.891 4316.427 - 4342.747: 0.3338% ( 3) 00:09:24.892 4342.747 - 4369.067: 0.3409% ( 1) 00:09:24.892 4369.067 - 4395.386: 0.3622% ( 3) 00:09:24.892 4395.386 - 4421.706: 0.3764% ( 2) 00:09:24.892 4421.706 - 4448.026: 0.3977% ( 3) 00:09:24.892 4448.026 - 4474.345: 0.4119% ( 2) 00:09:24.892 4474.345 - 4500.665: 0.4332% ( 3) 00:09:24.892 4500.665 - 4526.985: 0.4474% ( 2) 00:09:24.892 4526.985 - 4553.304: 0.4545% ( 1) 00:09:24.892 6395.682 - 6422.002: 0.4830% ( 4) 00:09:24.892 6422.002 - 6448.321: 0.5043% ( 3) 00:09:24.892 6448.321 - 6474.641: 0.5327% ( 4) 00:09:24.892 6474.641 - 6500.961: 0.5398% ( 1) 00:09:24.892 6500.961 - 6527.280: 0.5611% ( 3) 00:09:24.892 6553.600 - 6579.920: 0.5966% ( 5) 00:09:24.892 6579.920 - 6606.239: 0.6179% ( 3) 00:09:24.892 6606.239 - 6632.559: 0.6321% ( 2) 00:09:24.892 6632.559 - 6658.879: 0.6463% ( 2) 00:09:24.892 6658.879 - 6685.198: 0.6605% ( 2) 00:09:24.892 6685.198 - 6711.518: 0.6747% ( 2) 00:09:24.892 6711.518 - 6737.838: 0.6960% ( 3) 00:09:24.892 6737.838 - 6790.477: 0.7315% ( 5) 00:09:24.892 6790.477 - 6843.116: 0.7599% ( 4) 
00:09:24.892 6843.116 - 6895.756: 0.7884% ( 4) 00:09:24.892 6895.756 - 6948.395: 0.8168% ( 4) 00:09:24.892 6948.395 - 7001.035: 0.8523% ( 5) 00:09:24.892 7001.035 - 7053.674: 0.8878% ( 5) 00:09:24.892 7053.674 - 7106.313: 0.9091% ( 3) 00:09:24.892 7948.543 - 8001.182: 1.0156% ( 15) 00:09:24.892 8001.182 - 8053.822: 1.4134% ( 56) 00:09:24.892 8053.822 - 8106.461: 2.1236% ( 100) 00:09:24.892 8106.461 - 8159.100: 3.1960% ( 151) 00:09:24.892 8159.100 - 8211.740: 4.8082% ( 227) 00:09:24.892 8211.740 - 8264.379: 6.9886% ( 307) 00:09:24.892 8264.379 - 8317.018: 9.5810% ( 365) 00:09:24.892 8317.018 - 8369.658: 12.7983% ( 453) 00:09:24.892 8369.658 - 8422.297: 16.5270% ( 525) 00:09:24.892 8422.297 - 8474.937: 20.4403% ( 551) 00:09:24.892 8474.937 - 8527.576: 24.8366% ( 619) 00:09:24.892 8527.576 - 8580.215: 29.3963% ( 642) 00:09:24.892 8580.215 - 8632.855: 34.1903% ( 675) 00:09:24.892 8632.855 - 8685.494: 39.1548% ( 699) 00:09:24.892 8685.494 - 8738.133: 44.2045% ( 711) 00:09:24.892 8738.133 - 8790.773: 49.4105% ( 733) 00:09:24.892 8790.773 - 8843.412: 54.5739% ( 727) 00:09:24.892 8843.412 - 8896.051: 59.3963% ( 679) 00:09:24.892 8896.051 - 8948.691: 64.0838% ( 660) 00:09:24.892 8948.691 - 9001.330: 68.5866% ( 634) 00:09:24.892 9001.330 - 9053.969: 72.5568% ( 559) 00:09:24.892 9053.969 - 9106.609: 76.0795% ( 496) 00:09:24.892 9106.609 - 9159.248: 79.2401% ( 445) 00:09:24.892 9159.248 - 9211.888: 81.9318% ( 379) 00:09:24.892 9211.888 - 9264.527: 84.3324% ( 338) 00:09:24.892 9264.527 - 9317.166: 86.3139% ( 279) 00:09:24.892 9317.166 - 9369.806: 87.9261% ( 227) 00:09:24.892 9369.806 - 9422.445: 89.2614% ( 188) 00:09:24.892 9422.445 - 9475.084: 90.4332% ( 165) 00:09:24.892 9475.084 - 9527.724: 91.3494% ( 129) 00:09:24.892 9527.724 - 9580.363: 92.1094% ( 107) 00:09:24.892 9580.363 - 9633.002: 92.7841% ( 95) 00:09:24.892 9633.002 - 9685.642: 93.4659% ( 96) 00:09:24.892 9685.642 - 9738.281: 93.9844% ( 73) 00:09:24.892 9738.281 - 9790.920: 94.4673% ( 68) 00:09:24.892 9790.920 - 9843.560: 94.8580% ( 55) 00:09:24.892 9843.560 - 9896.199: 95.1562% ( 42) 00:09:24.892 9896.199 - 9948.839: 95.3622% ( 29) 00:09:24.892 9948.839 - 10001.478: 95.5469% ( 26) 00:09:24.892 10001.478 - 10054.117: 95.6889% ( 20) 00:09:24.892 10054.117 - 10106.757: 95.8168% ( 18) 00:09:24.892 10106.757 - 10159.396: 95.9304% ( 16) 00:09:24.892 10159.396 - 10212.035: 96.0298% ( 14) 00:09:24.892 10212.035 - 10264.675: 96.1506% ( 17) 00:09:24.892 10264.675 - 10317.314: 96.2216% ( 10) 00:09:24.892 10317.314 - 10369.953: 96.2926% ( 10) 00:09:24.892 10369.953 - 10422.593: 96.3778% ( 12) 00:09:24.892 10422.593 - 10475.232: 96.4276% ( 7) 00:09:24.892 10475.232 - 10527.871: 96.4631% ( 5) 00:09:24.892 10527.871 - 10580.511: 96.4915% ( 4) 00:09:24.892 10580.511 - 10633.150: 96.5128% ( 3) 00:09:24.892 10633.150 - 10685.790: 96.5412% ( 4) 00:09:24.892 10685.790 - 10738.429: 96.5838% ( 6) 00:09:24.892 10738.429 - 10791.068: 96.6690% ( 12) 00:09:24.892 10791.068 - 10843.708: 96.7401% ( 10) 00:09:24.892 10843.708 - 10896.347: 96.8395% ( 14) 00:09:24.892 10896.347 - 10948.986: 96.9247% ( 12) 00:09:24.892 10948.986 - 11001.626: 97.0028% ( 11) 00:09:24.892 11001.626 - 11054.265: 97.0881% ( 12) 00:09:24.892 11054.265 - 11106.904: 97.1804% ( 13) 00:09:24.892 11106.904 - 11159.544: 97.2585% ( 11) 00:09:24.892 11159.544 - 11212.183: 97.3509% ( 13) 00:09:24.892 11212.183 - 11264.822: 97.4432% ( 13) 00:09:24.892 11264.822 - 11317.462: 97.5355% ( 13) 00:09:24.892 11317.462 - 11370.101: 97.6136% ( 11) 00:09:24.892 11370.101 - 11422.741: 97.6989% ( 12) 00:09:24.892 
11422.741 - 11475.380: 97.7770% ( 11) 00:09:24.892 11475.380 - 11528.019: 97.8551% ( 11) 00:09:24.892 11528.019 - 11580.659: 97.9119% ( 8) 00:09:24.892 11580.659 - 11633.298: 97.9332% ( 3) 00:09:24.892 11633.298 - 11685.937: 97.9616% ( 4) 00:09:24.892 11685.937 - 11738.577: 97.9901% ( 4) 00:09:24.892 11738.577 - 11791.216: 98.0185% ( 4) 00:09:24.892 11791.216 - 11843.855: 98.0398% ( 3) 00:09:24.892 11843.855 - 11896.495: 98.0682% ( 4) 00:09:24.892 11896.495 - 11949.134: 98.0966% ( 4) 00:09:24.892 11949.134 - 12001.773: 98.1179% ( 3) 00:09:24.892 12001.773 - 12054.413: 98.1392% ( 3) 00:09:24.892 12054.413 - 12107.052: 98.1605% ( 3) 00:09:24.892 12107.052 - 12159.692: 98.1818% ( 3) 00:09:24.892 17265.709 - 17370.988: 98.2244% ( 6) 00:09:24.892 17370.988 - 17476.267: 98.2741% ( 7) 00:09:24.892 17476.267 - 17581.545: 98.3239% ( 7) 00:09:24.892 17581.545 - 17686.824: 98.3807% ( 8) 00:09:24.892 17686.824 - 17792.103: 98.4659% ( 12) 00:09:24.892 17792.103 - 17897.382: 98.5653% ( 14) 00:09:24.892 17897.382 - 18002.660: 98.6577% ( 13) 00:09:24.892 18002.660 - 18107.939: 98.7500% ( 13) 00:09:24.892 18107.939 - 18213.218: 98.8494% ( 14) 00:09:24.892 18213.218 - 18318.496: 98.8991% ( 7) 00:09:24.892 18318.496 - 18423.775: 98.9418% ( 6) 00:09:24.892 18423.775 - 18529.054: 98.9844% ( 6) 00:09:24.892 18529.054 - 18634.333: 99.0341% ( 7) 00:09:24.892 18634.333 - 18739.611: 99.0838% ( 7) 00:09:24.892 18739.611 - 18844.890: 99.0909% ( 1) 00:09:24.892 21792.694 - 21897.973: 99.0980% ( 1) 00:09:24.892 21897.973 - 22003.251: 99.1477% ( 7) 00:09:24.892 22003.251 - 22108.530: 99.1974% ( 7) 00:09:24.892 22108.530 - 22213.809: 99.2472% ( 7) 00:09:24.892 22213.809 - 22319.088: 99.2969% ( 7) 00:09:24.892 22319.088 - 22424.366: 99.3466% ( 7) 00:09:24.892 22424.366 - 22529.645: 99.3963% ( 7) 00:09:24.892 22529.645 - 22634.924: 99.4460% ( 7) 00:09:24.892 22634.924 - 22740.202: 99.4957% ( 7) 00:09:24.892 22740.202 - 22845.481: 99.5384% ( 6) 00:09:24.892 22845.481 - 22950.760: 99.5455% ( 1) 00:09:24.892 28214.696 - 28425.253: 99.5597% ( 2) 00:09:24.892 28425.253 - 28635.810: 99.6520% ( 13) 00:09:24.892 28635.810 - 28846.368: 99.7514% ( 14) 00:09:24.892 28846.368 - 29056.925: 99.8509% ( 14) 00:09:24.892 29056.925 - 29267.483: 99.9574% ( 15) 00:09:24.892 29267.483 - 29478.040: 100.0000% ( 6) 00:09:24.892 00:09:24.892 09:35:02 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:26.273 Initializing NVMe Controllers 00:09:26.273 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:26.273 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:26.273 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:26.273 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:26.273 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:26.273 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:26.273 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:26.273 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:26.273 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:26.273 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:26.273 Initialization complete. Launching workers. 
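For context, the results below come from the spdk_nvme_perf invocation echoed just above; a minimal sketch of an equivalent standalone run is shown here, assuming the SPDK build path from this log and the usual meanings of the perf tool's flags (these readings are assumptions from the tool's common usage, not taken from this log itself):

    # from the SPDK repo root, run the NVMe perf tool against all attached controllers
    # -q 128   : queue depth per namespace
    # -w write : write workload
    # -o 12288 : I/O size in bytes (12 KiB)
    # -t 1     : run time in seconds
    # -LL      : latency tracking; doubling the flag appears to request the detailed
    #            per-bucket histograms printed in this log
    # -i 0     : shared-memory group id
    sudo ./build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0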
00:09:26.273 ======================================================== 00:09:26.273 Latency(us) 00:09:26.273 Device Information : IOPS MiB/s Average min max 00:09:26.273 PCIE (0000:00:10.0) NSID 1 from core 0: 10404.03 121.92 12312.34 7426.58 36276.77 00:09:26.273 PCIE (0000:00:11.0) NSID 1 from core 0: 10404.03 121.92 12303.45 7429.17 35680.46 00:09:26.273 PCIE (0000:00:13.0) NSID 1 from core 0: 10404.03 121.92 12294.87 6116.09 36333.45 00:09:26.273 PCIE (0000:00:12.0) NSID 1 from core 0: 10404.03 121.92 12286.28 6012.13 35893.80 00:09:26.273 PCIE (0000:00:12.0) NSID 2 from core 0: 10404.03 121.92 12277.75 5973.75 35597.52 00:09:26.274 PCIE (0000:00:12.0) NSID 3 from core 0: 10467.86 122.67 12194.00 5849.15 28895.63 00:09:26.274 ======================================================== 00:09:26.274 Total : 62488.03 732.28 12278.03 5849.15 36333.45 00:09:26.274 00:09:26.274 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:26.274 ================================================================================= 00:09:26.274 1.00000% : 8159.100us 00:09:26.274 10.00000% : 9264.527us 00:09:26.274 25.00000% : 9896.199us 00:09:26.274 50.00000% : 12107.052us 00:09:26.274 75.00000% : 13370.397us 00:09:26.274 90.00000% : 16634.037us 00:09:26.274 95.00000% : 17686.824us 00:09:26.274 98.00000% : 19792.398us 00:09:26.274 99.00000% : 27161.908us 00:09:26.274 99.50000% : 34952.533us 00:09:26.274 99.90000% : 36215.878us 00:09:26.274 99.99000% : 36426.435us 00:09:26.274 99.99900% : 36426.435us 00:09:26.274 99.99990% : 36426.435us 00:09:26.274 99.99999% : 36426.435us 00:09:26.274 00:09:26.274 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:26.274 ================================================================================= 00:09:26.274 1.00000% : 8106.461us 00:09:26.274 10.00000% : 9317.166us 00:09:26.274 25.00000% : 9843.560us 00:09:26.274 50.00000% : 12054.413us 00:09:26.274 75.00000% : 13370.397us 00:09:26.274 90.00000% : 16634.037us 00:09:26.274 95.00000% : 17792.103us 00:09:26.274 98.00000% : 19581.841us 00:09:26.274 99.00000% : 26846.072us 00:09:26.274 99.50000% : 34952.533us 00:09:26.274 99.90000% : 35584.206us 00:09:26.274 99.99000% : 35794.763us 00:09:26.274 99.99900% : 35794.763us 00:09:26.274 99.99990% : 35794.763us 00:09:26.274 99.99999% : 35794.763us 00:09:26.274 00:09:26.274 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:26.274 ================================================================================= 00:09:26.274 1.00000% : 8317.018us 00:09:26.274 10.00000% : 9264.527us 00:09:26.274 25.00000% : 9896.199us 00:09:26.274 50.00000% : 12107.052us 00:09:26.274 75.00000% : 13370.397us 00:09:26.274 90.00000% : 16107.643us 00:09:26.274 95.00000% : 18213.218us 00:09:26.274 98.00000% : 19266.005us 00:09:26.274 99.00000% : 28004.138us 00:09:26.274 99.50000% : 35584.206us 00:09:26.274 99.90000% : 36215.878us 00:09:26.274 99.99000% : 36426.435us 00:09:26.274 99.99900% : 36426.435us 00:09:26.274 99.99990% : 36426.435us 00:09:26.274 99.99999% : 36426.435us 00:09:26.274 00:09:26.274 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:26.274 ================================================================================= 00:09:26.274 1.00000% : 8317.018us 00:09:26.274 10.00000% : 9211.888us 00:09:26.274 25.00000% : 9896.199us 00:09:26.274 50.00000% : 12159.692us 00:09:26.274 75.00000% : 13370.397us 00:09:26.274 90.00000% : 16002.365us 00:09:26.274 95.00000% : 18107.939us 00:09:26.274 98.00000% : 19687.120us 
00:09:26.274 99.00000% : 27793.581us 00:09:26.274 99.50000% : 35163.091us 00:09:26.274 99.90000% : 35794.763us 00:09:26.274 99.99000% : 36005.320us 00:09:26.274 99.99900% : 36005.320us 00:09:26.274 99.99990% : 36005.320us 00:09:26.274 99.99999% : 36005.320us 00:09:26.274 00:09:26.274 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:26.274 ================================================================================= 00:09:26.274 1.00000% : 8159.100us 00:09:26.274 10.00000% : 9317.166us 00:09:26.274 25.00000% : 9896.199us 00:09:26.274 50.00000% : 12159.692us 00:09:26.274 75.00000% : 13265.118us 00:09:26.274 90.00000% : 16107.643us 00:09:26.274 95.00000% : 17897.382us 00:09:26.274 98.00000% : 19687.120us 00:09:26.274 99.00000% : 27793.581us 00:09:26.274 99.50000% : 34741.976us 00:09:26.274 99.90000% : 35584.206us 00:09:26.274 99.99000% : 35584.206us 00:09:26.274 99.99900% : 35794.763us 00:09:26.274 99.99990% : 35794.763us 00:09:26.274 99.99999% : 35794.763us 00:09:26.274 00:09:26.274 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:26.274 ================================================================================= 00:09:26.274 1.00000% : 8053.822us 00:09:26.274 10.00000% : 9317.166us 00:09:26.274 25.00000% : 9843.560us 00:09:26.274 50.00000% : 12212.331us 00:09:26.274 75.00000% : 13317.757us 00:09:26.274 90.00000% : 16528.758us 00:09:26.274 95.00000% : 17581.545us 00:09:26.274 98.00000% : 19371.284us 00:09:26.274 99.00000% : 21055.743us 00:09:26.274 99.50000% : 28004.138us 00:09:26.274 99.90000% : 28846.368us 00:09:26.274 99.99000% : 29056.925us 00:09:26.274 99.99900% : 29056.925us 00:09:26.274 99.99990% : 29056.925us 00:09:26.274 99.99999% : 29056.925us 00:09:26.274 00:09:26.274 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:26.274 ============================================================================== 00:09:26.274 Range in us Cumulative IO count 00:09:26.274 7422.149 - 7474.789: 0.0383% ( 4) 00:09:26.274 7474.789 - 7527.428: 0.0959% ( 6) 00:09:26.274 7527.428 - 7580.067: 0.1534% ( 6) 00:09:26.274 7580.067 - 7632.707: 0.2109% ( 6) 00:09:26.274 7632.707 - 7685.346: 0.2780% ( 7) 00:09:26.274 7685.346 - 7737.986: 0.4601% ( 19) 00:09:26.274 7737.986 - 7790.625: 0.5272% ( 7) 00:09:26.274 7790.625 - 7843.264: 0.5752% ( 5) 00:09:26.274 7843.264 - 7895.904: 0.5847% ( 1) 00:09:26.274 7895.904 - 7948.543: 0.6327% ( 5) 00:09:26.274 7948.543 - 8001.182: 0.7669% ( 14) 00:09:26.274 8001.182 - 8053.822: 0.9298% ( 17) 00:09:26.274 8053.822 - 8106.461: 0.9969% ( 7) 00:09:26.274 8106.461 - 8159.100: 1.0640% ( 7) 00:09:26.274 8159.100 - 8211.740: 1.1024% ( 4) 00:09:26.274 8211.740 - 8264.379: 1.1215% ( 2) 00:09:26.274 8264.379 - 8317.018: 1.1982% ( 8) 00:09:26.274 8317.018 - 8369.658: 1.2366% ( 4) 00:09:26.274 8369.658 - 8422.297: 1.2845% ( 5) 00:09:26.274 8422.297 - 8474.937: 1.3708% ( 9) 00:09:26.274 8474.937 - 8527.576: 1.4571% ( 9) 00:09:26.274 8527.576 - 8580.215: 1.6200% ( 17) 00:09:26.274 8580.215 - 8632.855: 1.9172% ( 31) 00:09:26.274 8632.855 - 8685.494: 2.3869% ( 49) 00:09:26.274 8685.494 - 8738.133: 2.9620% ( 60) 00:09:26.274 8738.133 - 8790.773: 3.7193% ( 79) 00:09:26.274 8790.773 - 8843.412: 4.3232% ( 63) 00:09:26.274 8843.412 - 8896.051: 4.8409% ( 54) 00:09:26.274 8896.051 - 8948.691: 5.3777% ( 56) 00:09:26.274 8948.691 - 9001.330: 6.1733% ( 83) 00:09:26.274 9001.330 - 9053.969: 6.7581% ( 61) 00:09:26.274 9053.969 - 9106.609: 7.3524% ( 62) 00:09:26.274 9106.609 - 9159.248: 8.1576% ( 84) 00:09:26.274 9159.248 - 
9211.888: 9.1449% ( 103) 00:09:26.274 9211.888 - 9264.527: 10.4965% ( 141) 00:09:26.274 9264.527 - 9317.166: 11.7619% ( 132) 00:09:26.274 9317.166 - 9369.806: 12.7013% ( 98) 00:09:26.274 9369.806 - 9422.445: 13.7558% ( 110) 00:09:26.274 9422.445 - 9475.084: 14.9540% ( 125) 00:09:26.274 9475.084 - 9527.724: 16.3919% ( 150) 00:09:26.274 9527.724 - 9580.363: 17.9831% ( 166) 00:09:26.274 9580.363 - 9633.002: 19.3923% ( 147) 00:09:26.274 9633.002 - 9685.642: 20.6672% ( 133) 00:09:26.274 9685.642 - 9738.281: 21.9709% ( 136) 00:09:26.274 9738.281 - 9790.920: 23.2841% ( 137) 00:09:26.274 9790.920 - 9843.560: 24.6549% ( 143) 00:09:26.274 9843.560 - 9896.199: 25.6902% ( 108) 00:09:26.274 9896.199 - 9948.839: 26.6008% ( 95) 00:09:26.274 9948.839 - 10001.478: 28.2688% ( 174) 00:09:26.274 10001.478 - 10054.117: 29.8984% ( 170) 00:09:26.274 10054.117 - 10106.757: 31.6143% ( 179) 00:09:26.274 10106.757 - 10159.396: 32.6591% ( 109) 00:09:26.274 10159.396 - 10212.035: 33.9053% ( 130) 00:09:26.274 10212.035 - 10264.675: 34.8447% ( 98) 00:09:26.274 10264.675 - 10317.314: 35.6116% ( 80) 00:09:26.274 10317.314 - 10369.953: 36.2251% ( 64) 00:09:26.274 10369.953 - 10422.593: 37.0495% ( 86) 00:09:26.274 10422.593 - 10475.232: 37.7588% ( 74) 00:09:26.274 10475.232 - 10527.871: 38.4202% ( 69) 00:09:26.274 10527.871 - 10580.511: 39.0817% ( 69) 00:09:26.274 10580.511 - 10633.150: 39.5514% ( 49) 00:09:26.274 10633.150 - 10685.790: 40.1265% ( 60) 00:09:26.274 10685.790 - 10738.429: 40.6346% ( 53) 00:09:26.274 10738.429 - 10791.068: 41.0947% ( 48) 00:09:26.274 10791.068 - 10843.708: 41.4781% ( 40) 00:09:26.274 10843.708 - 10896.347: 41.8424% ( 38) 00:09:26.274 10896.347 - 10948.986: 42.1779% ( 35) 00:09:26.274 10948.986 - 11001.626: 42.7339% ( 58) 00:09:26.274 11001.626 - 11054.265: 43.0215% ( 30) 00:09:26.274 11054.265 - 11106.904: 43.3282% ( 32) 00:09:26.274 11106.904 - 11159.544: 43.6541% ( 34) 00:09:26.274 11159.544 - 11212.183: 43.9225% ( 28) 00:09:26.274 11212.183 - 11264.822: 44.1238% ( 21) 00:09:26.274 11264.822 - 11317.462: 44.4306% ( 32) 00:09:26.274 11317.462 - 11370.101: 44.6990% ( 28) 00:09:26.274 11370.101 - 11422.741: 45.0441% ( 36) 00:09:26.274 11422.741 - 11475.380: 45.4275% ( 40) 00:09:26.274 11475.380 - 11528.019: 45.7439% ( 33) 00:09:26.274 11528.019 - 11580.659: 46.1177% ( 39) 00:09:26.274 11580.659 - 11633.298: 46.4053% ( 30) 00:09:26.274 11633.298 - 11685.937: 46.6833% ( 29) 00:09:26.274 11685.937 - 11738.577: 46.9421% ( 27) 00:09:26.274 11738.577 - 11791.216: 47.1722% ( 24) 00:09:26.274 11791.216 - 11843.855: 47.4214% ( 26) 00:09:26.274 11843.855 - 11896.495: 47.7952% ( 39) 00:09:26.274 11896.495 - 11949.134: 48.5046% ( 74) 00:09:26.274 11949.134 - 12001.773: 49.0414% ( 56) 00:09:26.275 12001.773 - 12054.413: 49.7220% ( 71) 00:09:26.275 12054.413 - 12107.052: 50.4889% ( 80) 00:09:26.275 12107.052 - 12159.692: 51.8501% ( 142) 00:09:26.275 12159.692 - 12212.331: 53.4797% ( 170) 00:09:26.275 12212.331 - 12264.970: 54.6971% ( 127) 00:09:26.275 12264.970 - 12317.610: 55.5982% ( 94) 00:09:26.275 12317.610 - 12370.249: 56.6718% ( 112) 00:09:26.275 12370.249 - 12422.888: 57.9946% ( 138) 00:09:26.275 12422.888 - 12475.528: 58.9245% ( 97) 00:09:26.275 12475.528 - 12528.167: 59.9118% ( 103) 00:09:26.275 12528.167 - 12580.806: 60.8896% ( 102) 00:09:26.275 12580.806 - 12633.446: 61.9919% ( 115) 00:09:26.275 12633.446 - 12686.085: 63.0176% ( 107) 00:09:26.275 12686.085 - 12738.724: 63.9475% ( 97) 00:09:26.275 12738.724 - 12791.364: 65.0594% ( 116) 00:09:26.275 12791.364 - 12844.003: 66.2960% ( 129) 
00:09:26.275 12844.003 - 12896.643: 67.4080% ( 116) 00:09:26.275 12896.643 - 12949.282: 68.5583% ( 120) 00:09:26.275 12949.282 - 13001.921: 69.5840% ( 107) 00:09:26.275 13001.921 - 13054.561: 70.6384% ( 110) 00:09:26.275 13054.561 - 13107.200: 71.5778% ( 98) 00:09:26.275 13107.200 - 13159.839: 72.4502% ( 91) 00:09:26.275 13159.839 - 13212.479: 73.2074% ( 79) 00:09:26.275 13212.479 - 13265.118: 73.9360% ( 76) 00:09:26.275 13265.118 - 13317.757: 74.5974% ( 69) 00:09:26.275 13317.757 - 13370.397: 75.1917% ( 62) 00:09:26.275 13370.397 - 13423.036: 75.7094% ( 54) 00:09:26.275 13423.036 - 13475.676: 76.2270% ( 54) 00:09:26.275 13475.676 - 13580.954: 76.8692% ( 67) 00:09:26.275 13580.954 - 13686.233: 77.6553% ( 82) 00:09:26.275 13686.233 - 13791.512: 78.2304% ( 60) 00:09:26.275 13791.512 - 13896.790: 78.8823% ( 68) 00:09:26.275 13896.790 - 14002.069: 79.5054% ( 65) 00:09:26.275 14002.069 - 14107.348: 80.1860% ( 71) 00:09:26.275 14107.348 - 14212.627: 80.8282% ( 67) 00:09:26.275 14212.627 - 14317.905: 81.4034% ( 60) 00:09:26.275 14317.905 - 14423.184: 82.0169% ( 64) 00:09:26.275 14423.184 - 14528.463: 82.6400% ( 65) 00:09:26.275 14528.463 - 14633.741: 83.0426% ( 42) 00:09:26.275 14633.741 - 14739.020: 83.5985% ( 58) 00:09:26.275 14739.020 - 14844.299: 83.8957% ( 31) 00:09:26.275 14844.299 - 14949.578: 84.2120% ( 33) 00:09:26.275 14949.578 - 15054.856: 84.5380% ( 34) 00:09:26.275 15054.856 - 15160.135: 84.8639% ( 34) 00:09:26.275 15160.135 - 15265.414: 85.1419% ( 29) 00:09:26.275 15265.414 - 15370.692: 85.5061% ( 38) 00:09:26.275 15370.692 - 15475.971: 85.8608% ( 37) 00:09:26.275 15475.971 - 15581.250: 86.2634% ( 42) 00:09:26.275 15581.250 - 15686.529: 86.5798% ( 33) 00:09:26.275 15686.529 - 15791.807: 87.0974% ( 54) 00:09:26.275 15791.807 - 15897.086: 87.5959% ( 52) 00:09:26.275 15897.086 - 16002.365: 87.9026% ( 32) 00:09:26.275 16002.365 - 16107.643: 88.1902% ( 30) 00:09:26.275 16107.643 - 16212.922: 88.4778% ( 30) 00:09:26.275 16212.922 - 16318.201: 88.8324% ( 37) 00:09:26.275 16318.201 - 16423.480: 89.2638% ( 45) 00:09:26.275 16423.480 - 16528.758: 89.7335% ( 49) 00:09:26.275 16528.758 - 16634.037: 90.2416% ( 53) 00:09:26.275 16634.037 - 16739.316: 90.7209% ( 50) 00:09:26.275 16739.316 - 16844.594: 91.2002% ( 50) 00:09:26.275 16844.594 - 16949.873: 91.7274% ( 55) 00:09:26.275 16949.873 - 17055.152: 92.4942% ( 80) 00:09:26.275 17055.152 - 17160.431: 93.1557% ( 69) 00:09:26.275 17160.431 - 17265.709: 93.7979% ( 67) 00:09:26.275 17265.709 - 17370.988: 94.2772% ( 50) 00:09:26.275 17370.988 - 17476.267: 94.6511% ( 39) 00:09:26.275 17476.267 - 17581.545: 94.9962% ( 36) 00:09:26.275 17581.545 - 17686.824: 95.3029% ( 32) 00:09:26.275 17686.824 - 17792.103: 95.5809% ( 29) 00:09:26.275 17792.103 - 17897.382: 95.8014% ( 23) 00:09:26.275 17897.382 - 18002.660: 95.9931% ( 20) 00:09:26.275 18002.660 - 18107.939: 96.1465% ( 16) 00:09:26.275 18107.939 - 18213.218: 96.3957% ( 26) 00:09:26.275 18213.218 - 18318.496: 96.7025% ( 32) 00:09:26.275 18318.496 - 18423.775: 96.9709% ( 28) 00:09:26.275 18423.775 - 18529.054: 97.1242% ( 16) 00:09:26.275 18529.054 - 18634.333: 97.2297% ( 11) 00:09:26.275 18634.333 - 18739.611: 97.3447% ( 12) 00:09:26.275 18739.611 - 18844.890: 97.4789% ( 14) 00:09:26.275 18844.890 - 18950.169: 97.5652% ( 9) 00:09:26.275 18950.169 - 19055.447: 97.6706% ( 11) 00:09:26.275 19055.447 - 19160.726: 97.7377% ( 7) 00:09:26.275 19160.726 - 19266.005: 97.7952% ( 6) 00:09:26.275 19266.005 - 19371.284: 97.8048% ( 1) 00:09:26.275 19371.284 - 19476.562: 97.8432% ( 4) 00:09:26.275 19476.562 - 19581.841: 
97.8911% ( 5) 00:09:26.275 19581.841 - 19687.120: 97.9870% ( 10) 00:09:26.275 19687.120 - 19792.398: 98.0445% ( 6) 00:09:26.275 19792.398 - 19897.677: 98.0732% ( 3) 00:09:26.275 19897.677 - 20002.956: 98.0924% ( 2) 00:09:26.275 20108.235 - 20213.513: 98.1116% ( 2) 00:09:26.275 20213.513 - 20318.792: 98.1403% ( 3) 00:09:26.275 20318.792 - 20424.071: 98.1595% ( 2) 00:09:26.275 21266.300 - 21371.579: 98.1787% ( 2) 00:09:26.275 21371.579 - 21476.858: 98.2074% ( 3) 00:09:26.275 21476.858 - 21582.137: 98.2362% ( 3) 00:09:26.275 21582.137 - 21687.415: 98.2650% ( 3) 00:09:26.275 21687.415 - 21792.694: 98.2937% ( 3) 00:09:26.275 21792.694 - 21897.973: 98.3800% ( 9) 00:09:26.275 21897.973 - 22003.251: 98.5142% ( 14) 00:09:26.275 22003.251 - 22108.530: 98.6484% ( 14) 00:09:26.275 22108.530 - 22213.809: 98.7347% ( 9) 00:09:26.275 22424.366 - 22529.645: 98.7442% ( 1) 00:09:26.275 22529.645 - 22634.924: 98.7730% ( 3) 00:09:26.275 26530.236 - 26635.515: 98.7826% ( 1) 00:09:26.275 26740.794 - 26846.072: 98.8113% ( 3) 00:09:26.275 26846.072 - 26951.351: 98.9647% ( 16) 00:09:26.275 26951.351 - 27161.908: 99.1469% ( 19) 00:09:26.275 27161.908 - 27372.466: 99.2811% ( 14) 00:09:26.275 27372.466 - 27583.023: 99.3002% ( 2) 00:09:26.275 27583.023 - 27793.581: 99.3865% ( 9) 00:09:26.275 34531.418 - 34741.976: 99.4344% ( 5) 00:09:26.275 34741.976 - 34952.533: 99.5303% ( 10) 00:09:26.275 34952.533 - 35163.091: 99.6262% ( 10) 00:09:26.275 35163.091 - 35373.648: 99.7028% ( 8) 00:09:26.275 35584.206 - 35794.763: 99.7891% ( 9) 00:09:26.275 35794.763 - 36005.320: 99.8850% ( 10) 00:09:26.275 36005.320 - 36215.878: 99.9808% ( 10) 00:09:26.275 36215.878 - 36426.435: 100.0000% ( 2) 00:09:26.275 00:09:26.275 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:26.275 ============================================================================== 00:09:26.275 Range in us Cumulative IO count 00:09:26.275 7422.149 - 7474.789: 0.0575% ( 6) 00:09:26.275 7474.789 - 7527.428: 0.1438% ( 9) 00:09:26.275 7527.428 - 7580.067: 0.2205% ( 8) 00:09:26.275 7580.067 - 7632.707: 0.3067% ( 9) 00:09:26.275 7632.707 - 7685.346: 0.4026% ( 10) 00:09:26.275 7685.346 - 7737.986: 0.4601% ( 6) 00:09:26.275 7737.986 - 7790.625: 0.5368% ( 8) 00:09:26.275 7790.625 - 7843.264: 0.6231% ( 9) 00:09:26.275 7895.904 - 7948.543: 0.6327% ( 1) 00:09:26.275 7948.543 - 8001.182: 0.6806% ( 5) 00:09:26.275 8001.182 - 8053.822: 0.7956% ( 12) 00:09:26.275 8053.822 - 8106.461: 1.1695% ( 39) 00:09:26.275 8106.461 - 8159.100: 1.2078% ( 4) 00:09:26.275 8159.100 - 8211.740: 1.2270% ( 2) 00:09:26.275 8211.740 - 8264.379: 1.2366% ( 1) 00:09:26.275 8317.018 - 8369.658: 1.2653% ( 3) 00:09:26.275 8369.658 - 8422.297: 1.2941% ( 3) 00:09:26.275 8422.297 - 8474.937: 1.4379% ( 15) 00:09:26.275 8474.937 - 8527.576: 1.6392% ( 21) 00:09:26.275 8527.576 - 8580.215: 2.0993% ( 48) 00:09:26.275 8580.215 - 8632.855: 2.5403% ( 46) 00:09:26.275 8632.855 - 8685.494: 2.9237% ( 40) 00:09:26.275 8685.494 - 8738.133: 3.6522% ( 76) 00:09:26.275 8738.133 - 8790.773: 4.2657% ( 64) 00:09:26.275 8790.773 - 8843.412: 4.9271% ( 69) 00:09:26.275 8843.412 - 8896.051: 5.4064% ( 50) 00:09:26.275 8896.051 - 8948.691: 5.7899% ( 40) 00:09:26.275 8948.691 - 9001.330: 6.2404% ( 47) 00:09:26.275 9001.330 - 9053.969: 6.6814% ( 46) 00:09:26.275 9053.969 - 9106.609: 7.1319% ( 47) 00:09:26.275 9106.609 - 9159.248: 7.8796% ( 78) 00:09:26.275 9159.248 - 9211.888: 8.6561% ( 81) 00:09:26.275 9211.888 - 9264.527: 9.4613% ( 84) 00:09:26.275 9264.527 - 9317.166: 10.3911% ( 97) 00:09:26.275 9317.166 - 9369.806: 
11.6948% ( 136) 00:09:26.275 9369.806 - 9422.445: 12.8547% ( 121) 00:09:26.275 9422.445 - 9475.084: 14.4268% ( 164) 00:09:26.275 9475.084 - 9527.724: 15.7592% ( 139) 00:09:26.275 9527.724 - 9580.363: 17.1683% ( 147) 00:09:26.275 9580.363 - 9633.002: 19.1143% ( 203) 00:09:26.275 9633.002 - 9685.642: 20.4467% ( 139) 00:09:26.275 9685.642 - 9738.281: 22.0667% ( 169) 00:09:26.275 9738.281 - 9790.920: 23.4183% ( 141) 00:09:26.275 9790.920 - 9843.560: 25.0000% ( 165) 00:09:26.275 9843.560 - 9896.199: 26.4571% ( 152) 00:09:26.275 9896.199 - 9948.839: 27.7895% ( 139) 00:09:26.275 9948.839 - 10001.478: 29.1699% ( 144) 00:09:26.275 10001.478 - 10054.117: 30.8762% ( 178) 00:09:26.275 10054.117 - 10106.757: 32.2757% ( 146) 00:09:26.275 10106.757 - 10159.396: 33.6273% ( 141) 00:09:26.275 10159.396 - 10212.035: 34.7009% ( 112) 00:09:26.275 10212.035 - 10264.675: 36.1771% ( 154) 00:09:26.275 10264.675 - 10317.314: 36.9440% ( 80) 00:09:26.275 10317.314 - 10369.953: 37.4617% ( 54) 00:09:26.275 10369.953 - 10422.593: 37.9410% ( 50) 00:09:26.275 10422.593 - 10475.232: 38.6311% ( 72) 00:09:26.275 10475.232 - 10527.871: 38.9858% ( 37) 00:09:26.276 10527.871 - 10580.511: 39.4747% ( 51) 00:09:26.276 10580.511 - 10633.150: 40.0115% ( 56) 00:09:26.276 10633.150 - 10685.790: 40.4237% ( 43) 00:09:26.276 10685.790 - 10738.429: 40.8838% ( 48) 00:09:26.276 10738.429 - 10791.068: 41.4494% ( 59) 00:09:26.276 10791.068 - 10843.708: 42.0150% ( 59) 00:09:26.276 10843.708 - 10896.347: 42.5134% ( 52) 00:09:26.276 10896.347 - 10948.986: 42.8777% ( 38) 00:09:26.276 10948.986 - 11001.626: 43.4337% ( 58) 00:09:26.276 11001.626 - 11054.265: 43.8171% ( 40) 00:09:26.276 11054.265 - 11106.904: 44.2772% ( 48) 00:09:26.276 11106.904 - 11159.544: 44.5265% ( 26) 00:09:26.276 11159.544 - 11212.183: 44.9387% ( 43) 00:09:26.276 11212.183 - 11264.822: 45.2837% ( 36) 00:09:26.276 11264.822 - 11317.462: 45.6001% ( 33) 00:09:26.276 11317.462 - 11370.101: 45.9068% ( 32) 00:09:26.276 11370.101 - 11422.741: 46.2040% ( 31) 00:09:26.276 11422.741 - 11475.380: 46.4149% ( 22) 00:09:26.276 11475.380 - 11528.019: 46.7408% ( 34) 00:09:26.276 11528.019 - 11580.659: 47.0380% ( 31) 00:09:26.276 11580.659 - 11633.298: 47.3160% ( 29) 00:09:26.276 11633.298 - 11685.937: 47.6131% ( 31) 00:09:26.276 11685.937 - 11738.577: 47.9678% ( 37) 00:09:26.276 11738.577 - 11791.216: 48.3896% ( 44) 00:09:26.276 11791.216 - 11843.855: 48.7251% ( 35) 00:09:26.276 11843.855 - 11896.495: 48.9935% ( 28) 00:09:26.276 11896.495 - 11949.134: 49.3194% ( 34) 00:09:26.276 11949.134 - 12001.773: 49.7891% ( 49) 00:09:26.276 12001.773 - 12054.413: 50.2492% ( 48) 00:09:26.276 12054.413 - 12107.052: 50.5943% ( 36) 00:09:26.276 12107.052 - 12159.692: 51.1311% ( 56) 00:09:26.276 12159.692 - 12212.331: 51.6967% ( 59) 00:09:26.276 12212.331 - 12264.970: 52.6457% ( 99) 00:09:26.276 12264.970 - 12317.610: 53.8919% ( 130) 00:09:26.276 12317.610 - 12370.249: 55.1668% ( 133) 00:09:26.276 12370.249 - 12422.888: 56.5376% ( 143) 00:09:26.276 12422.888 - 12475.528: 57.8988% ( 142) 00:09:26.276 12475.528 - 12528.167: 59.5475% ( 172) 00:09:26.276 12528.167 - 12580.806: 61.0046% ( 152) 00:09:26.276 12580.806 - 12633.446: 62.4425% ( 150) 00:09:26.276 12633.446 - 12686.085: 63.7558% ( 137) 00:09:26.276 12686.085 - 12738.724: 64.9252% ( 122) 00:09:26.276 12738.724 - 12791.364: 66.0372% ( 116) 00:09:26.276 12791.364 - 12844.003: 67.1108% ( 112) 00:09:26.276 12844.003 - 12896.643: 68.1365% ( 107) 00:09:26.276 12896.643 - 12949.282: 69.2389% ( 115) 00:09:26.276 12949.282 - 13001.921: 70.2933% ( 110) 
00:09:26.276 13001.921 - 13054.561: 71.2232% ( 97) 00:09:26.276 13054.561 - 13107.200: 72.1338% ( 95) 00:09:26.276 13107.200 - 13159.839: 73.2458% ( 116) 00:09:26.276 13159.839 - 13212.479: 74.0702% ( 86) 00:09:26.276 13212.479 - 13265.118: 74.5878% ( 54) 00:09:26.276 13265.118 - 13317.757: 74.9712% ( 40) 00:09:26.276 13317.757 - 13370.397: 75.3163% ( 36) 00:09:26.276 13370.397 - 13423.036: 75.6423% ( 34) 00:09:26.276 13423.036 - 13475.676: 75.9873% ( 36) 00:09:26.276 13475.676 - 13580.954: 76.7734% ( 82) 00:09:26.276 13580.954 - 13686.233: 77.5498% ( 81) 00:09:26.276 13686.233 - 13791.512: 78.4317% ( 92) 00:09:26.276 13791.512 - 13896.790: 79.2561% ( 86) 00:09:26.276 13896.790 - 14002.069: 80.0901% ( 87) 00:09:26.276 14002.069 - 14107.348: 80.5694% ( 50) 00:09:26.276 14107.348 - 14212.627: 81.0199% ( 47) 00:09:26.276 14212.627 - 14317.905: 81.3554% ( 35) 00:09:26.276 14317.905 - 14423.184: 81.9018% ( 57) 00:09:26.276 14423.184 - 14528.463: 82.5729% ( 70) 00:09:26.276 14528.463 - 14633.741: 83.1001% ( 55) 00:09:26.276 14633.741 - 14739.020: 83.5219% ( 44) 00:09:26.276 14739.020 - 14844.299: 83.8382% ( 33) 00:09:26.276 14844.299 - 14949.578: 84.1929% ( 37) 00:09:26.276 14949.578 - 15054.856: 84.6722% ( 50) 00:09:26.276 15054.856 - 15160.135: 84.9597% ( 30) 00:09:26.276 15160.135 - 15265.414: 85.1419% ( 19) 00:09:26.276 15265.414 - 15370.692: 85.3528% ( 22) 00:09:26.276 15370.692 - 15475.971: 85.6020% ( 26) 00:09:26.276 15475.971 - 15581.250: 85.7841% ( 19) 00:09:26.276 15581.250 - 15686.529: 86.2634% ( 50) 00:09:26.276 15686.529 - 15791.807: 86.9057% ( 67) 00:09:26.276 15791.807 - 15897.086: 87.3850% ( 50) 00:09:26.276 15897.086 - 16002.365: 88.0081% ( 65) 00:09:26.276 16002.365 - 16107.643: 88.4873% ( 50) 00:09:26.276 16107.643 - 16212.922: 88.8324% ( 36) 00:09:26.276 16212.922 - 16318.201: 89.0913% ( 27) 00:09:26.276 16318.201 - 16423.480: 89.4076% ( 33) 00:09:26.276 16423.480 - 16528.758: 89.7431% ( 35) 00:09:26.276 16528.758 - 16634.037: 90.0594% ( 33) 00:09:26.276 16634.037 - 16739.316: 90.5579% ( 52) 00:09:26.276 16739.316 - 16844.594: 90.9893% ( 45) 00:09:26.276 16844.594 - 16949.873: 91.5357% ( 57) 00:09:26.276 16949.873 - 17055.152: 92.0341% ( 52) 00:09:26.276 17055.152 - 17160.431: 92.4080% ( 39) 00:09:26.276 17160.431 - 17265.709: 92.9064% ( 52) 00:09:26.276 17265.709 - 17370.988: 93.2611% ( 37) 00:09:26.276 17370.988 - 17476.267: 93.5775% ( 33) 00:09:26.276 17476.267 - 17581.545: 94.1334% ( 58) 00:09:26.276 17581.545 - 17686.824: 94.6607% ( 55) 00:09:26.276 17686.824 - 17792.103: 95.0729% ( 43) 00:09:26.276 17792.103 - 17897.382: 95.3604% ( 30) 00:09:26.276 17897.382 - 18002.660: 95.6097% ( 26) 00:09:26.276 18002.660 - 18107.939: 95.8972% ( 30) 00:09:26.276 18107.939 - 18213.218: 96.2040% ( 32) 00:09:26.276 18213.218 - 18318.496: 96.5395% ( 35) 00:09:26.276 18318.496 - 18423.775: 96.7216% ( 19) 00:09:26.276 18423.775 - 18529.054: 96.9038% ( 19) 00:09:26.276 18529.054 - 18634.333: 97.1434% ( 25) 00:09:26.276 18634.333 - 18739.611: 97.3735% ( 24) 00:09:26.276 18739.611 - 18844.890: 97.5364% ( 17) 00:09:26.276 18844.890 - 18950.169: 97.6419% ( 11) 00:09:26.276 18950.169 - 19055.447: 97.7186% ( 8) 00:09:26.276 19055.447 - 19160.726: 97.7857% ( 7) 00:09:26.276 19160.726 - 19266.005: 97.8432% ( 6) 00:09:26.276 19266.005 - 19371.284: 97.9103% ( 7) 00:09:26.276 19371.284 - 19476.562: 97.9774% ( 7) 00:09:26.276 19476.562 - 19581.841: 98.0349% ( 6) 00:09:26.276 19581.841 - 19687.120: 98.0924% ( 6) 00:09:26.276 19687.120 - 19792.398: 98.1308% ( 4) 00:09:26.276 19792.398 - 19897.677: 
98.1595% ( 3) 00:09:26.276 22108.530 - 22213.809: 98.1787% ( 2) 00:09:26.276 22213.809 - 22319.088: 98.2266% ( 5) 00:09:26.276 22319.088 - 22424.366: 98.2745% ( 5) 00:09:26.276 22424.366 - 22529.645: 98.3033% ( 3) 00:09:26.276 22529.645 - 22634.924: 98.3512% ( 5) 00:09:26.276 22634.924 - 22740.202: 98.4087% ( 6) 00:09:26.276 22740.202 - 22845.481: 98.4567% ( 5) 00:09:26.276 22845.481 - 22950.760: 98.5046% ( 5) 00:09:26.276 22950.760 - 23056.039: 98.5621% ( 6) 00:09:26.276 23056.039 - 23161.317: 98.6100% ( 5) 00:09:26.276 23161.317 - 23266.596: 98.6580% ( 5) 00:09:26.276 23266.596 - 23371.875: 98.7059% ( 5) 00:09:26.276 23371.875 - 23477.153: 98.7538% ( 5) 00:09:26.276 23477.153 - 23582.432: 98.7730% ( 2) 00:09:26.276 26424.957 - 26530.236: 98.8305% ( 6) 00:09:26.276 26530.236 - 26635.515: 98.8880% ( 6) 00:09:26.276 26635.515 - 26740.794: 98.9551% ( 7) 00:09:26.276 26740.794 - 26846.072: 99.0127% ( 6) 00:09:26.276 26846.072 - 26951.351: 99.0702% ( 6) 00:09:26.276 26951.351 - 27161.908: 99.1948% ( 13) 00:09:26.276 27161.908 - 27372.466: 99.3194% ( 13) 00:09:26.276 27372.466 - 27583.023: 99.3865% ( 7) 00:09:26.276 34531.418 - 34741.976: 99.4824% ( 10) 00:09:26.276 34741.976 - 34952.533: 99.5974% ( 12) 00:09:26.276 34952.533 - 35163.091: 99.7124% ( 12) 00:09:26.276 35163.091 - 35373.648: 99.8275% ( 12) 00:09:26.276 35373.648 - 35584.206: 99.9425% ( 12) 00:09:26.276 35584.206 - 35794.763: 100.0000% ( 6) 00:09:26.276 00:09:26.276 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:26.276 ============================================================================== 00:09:26.276 Range in us Cumulative IO count 00:09:26.276 6106.165 - 6132.485: 0.0096% ( 1) 00:09:26.276 6343.043 - 6369.362: 0.0192% ( 1) 00:09:26.276 6369.362 - 6395.682: 0.0383% ( 2) 00:09:26.276 6395.682 - 6422.002: 0.0479% ( 1) 00:09:26.276 6422.002 - 6448.321: 0.0959% ( 5) 00:09:26.276 6448.321 - 6474.641: 0.1342% ( 4) 00:09:26.276 6474.641 - 6500.961: 0.1725% ( 4) 00:09:26.276 6500.961 - 6527.280: 0.2205% ( 5) 00:09:26.276 6527.280 - 6553.600: 0.4314% ( 22) 00:09:26.276 6553.600 - 6579.920: 0.4697% ( 4) 00:09:26.276 6579.920 - 6606.239: 0.4985% ( 3) 00:09:26.276 6606.239 - 6632.559: 0.5272% ( 3) 00:09:26.276 6632.559 - 6658.879: 0.5560% ( 3) 00:09:26.276 6658.879 - 6685.198: 0.5752% ( 2) 00:09:26.276 6685.198 - 6711.518: 0.5943% ( 2) 00:09:26.276 6711.518 - 6737.838: 0.6039% ( 1) 00:09:26.276 6737.838 - 6790.477: 0.6135% ( 1) 00:09:26.276 8159.100 - 8211.740: 0.7285% ( 12) 00:09:26.276 8211.740 - 8264.379: 0.9011% ( 18) 00:09:26.276 8264.379 - 8317.018: 1.0544% ( 16) 00:09:26.276 8317.018 - 8369.658: 1.3995% ( 36) 00:09:26.276 8369.658 - 8422.297: 1.6104% ( 22) 00:09:26.276 8422.297 - 8474.937: 1.8213% ( 22) 00:09:26.276 8474.937 - 8527.576: 2.0610% ( 25) 00:09:26.276 8527.576 - 8580.215: 2.3390% ( 29) 00:09:26.276 8580.215 - 8632.855: 2.5019% ( 17) 00:09:26.276 8632.855 - 8685.494: 2.7416% ( 25) 00:09:26.276 8685.494 - 8738.133: 3.0387% ( 31) 00:09:26.276 8738.133 - 8790.773: 3.4605% ( 44) 00:09:26.276 8790.773 - 8843.412: 3.8631% ( 42) 00:09:26.276 8843.412 - 8896.051: 4.6492% ( 82) 00:09:26.276 8896.051 - 8948.691: 5.2818% ( 66) 00:09:26.276 8948.691 - 9001.330: 5.9145% ( 66) 00:09:26.276 9001.330 - 9053.969: 6.7772% ( 90) 00:09:26.277 9053.969 - 9106.609: 7.6208% ( 88) 00:09:26.277 9106.609 - 9159.248: 8.4739% ( 89) 00:09:26.277 9159.248 - 9211.888: 9.5380% ( 111) 00:09:26.277 9211.888 - 9264.527: 10.7170% ( 123) 00:09:26.277 9264.527 - 9317.166: 11.9440% ( 128) 00:09:26.277 9317.166 - 9369.806: 12.9026% ( 
100) 00:09:26.277 9369.806 - 9422.445: 13.6982% ( 83) 00:09:26.277 9422.445 - 9475.084: 14.6952% ( 104) 00:09:26.277 9475.084 - 9527.724: 15.8742% ( 123) 00:09:26.277 9527.724 - 9580.363: 16.9383% ( 111) 00:09:26.277 9580.363 - 9633.002: 18.0406% ( 115) 00:09:26.277 9633.002 - 9685.642: 19.8236% ( 186) 00:09:26.277 9685.642 - 9738.281: 21.1465% ( 138) 00:09:26.277 9738.281 - 9790.920: 22.3831% ( 129) 00:09:26.277 9790.920 - 9843.560: 23.9743% ( 166) 00:09:26.277 9843.560 - 9896.199: 25.1630% ( 124) 00:09:26.277 9896.199 - 9948.839: 26.5433% ( 144) 00:09:26.277 9948.839 - 10001.478: 27.8087% ( 132) 00:09:26.277 10001.478 - 10054.117: 29.2082% ( 146) 00:09:26.277 10054.117 - 10106.757: 30.7324% ( 159) 00:09:26.277 10106.757 - 10159.396: 32.1990% ( 153) 00:09:26.277 10159.396 - 10212.035: 33.5985% ( 146) 00:09:26.277 10212.035 - 10264.675: 34.7393% ( 119) 00:09:26.277 10264.675 - 10317.314: 35.8704% ( 118) 00:09:26.277 10317.314 - 10369.953: 36.7715% ( 94) 00:09:26.277 10369.953 - 10422.593: 37.5767% ( 84) 00:09:26.277 10422.593 - 10475.232: 38.9379% ( 142) 00:09:26.277 10475.232 - 10527.871: 39.7623% ( 86) 00:09:26.277 10527.871 - 10580.511: 40.3758% ( 64) 00:09:26.277 10580.511 - 10633.150: 40.9988% ( 65) 00:09:26.277 10633.150 - 10685.790: 41.8328% ( 87) 00:09:26.277 10685.790 - 10738.429: 42.3600% ( 55) 00:09:26.277 10738.429 - 10791.068: 42.9064% ( 57) 00:09:26.277 10791.068 - 10843.708: 43.4337% ( 55) 00:09:26.277 10843.708 - 10896.347: 43.6925% ( 27) 00:09:26.277 10896.347 - 10948.986: 43.8842% ( 20) 00:09:26.277 10948.986 - 11001.626: 44.1143% ( 24) 00:09:26.277 11001.626 - 11054.265: 44.4306% ( 33) 00:09:26.277 11054.265 - 11106.904: 44.7469% ( 33) 00:09:26.277 11106.904 - 11159.544: 45.0633% ( 33) 00:09:26.277 11159.544 - 11212.183: 45.1879% ( 13) 00:09:26.277 11212.183 - 11264.822: 45.3413% ( 16) 00:09:26.277 11264.822 - 11317.462: 45.4850% ( 15) 00:09:26.277 11317.462 - 11370.101: 45.6768% ( 20) 00:09:26.277 11370.101 - 11422.741: 45.8206% ( 15) 00:09:26.277 11422.741 - 11475.380: 45.9452% ( 13) 00:09:26.277 11475.380 - 11528.019: 46.0219% ( 8) 00:09:26.277 11528.019 - 11580.659: 46.1369% ( 12) 00:09:26.277 11580.659 - 11633.298: 46.2807% ( 15) 00:09:26.277 11633.298 - 11685.937: 46.4916% ( 22) 00:09:26.277 11685.937 - 11738.577: 46.7887% ( 31) 00:09:26.277 11738.577 - 11791.216: 47.0667% ( 29) 00:09:26.277 11791.216 - 11843.855: 47.3255% ( 27) 00:09:26.277 11843.855 - 11896.495: 47.8048% ( 50) 00:09:26.277 11896.495 - 11949.134: 48.4758% ( 70) 00:09:26.277 11949.134 - 12001.773: 49.0702% ( 62) 00:09:26.277 12001.773 - 12054.413: 49.5111% ( 46) 00:09:26.277 12054.413 - 12107.052: 50.0383% ( 55) 00:09:26.277 12107.052 - 12159.692: 50.3547% ( 33) 00:09:26.277 12159.692 - 12212.331: 50.8052% ( 47) 00:09:26.277 12212.331 - 12264.970: 51.5721% ( 80) 00:09:26.277 12264.970 - 12317.610: 52.8566% ( 134) 00:09:26.277 12317.610 - 12370.249: 54.2370% ( 144) 00:09:26.277 12370.249 - 12422.888: 55.7324% ( 156) 00:09:26.277 12422.888 - 12475.528: 57.5058% ( 185) 00:09:26.277 12475.528 - 12528.167: 59.1258% ( 169) 00:09:26.277 12528.167 - 12580.806: 60.8704% ( 182) 00:09:26.277 12580.806 - 12633.446: 62.3946% ( 159) 00:09:26.277 12633.446 - 12686.085: 63.6120% ( 127) 00:09:26.277 12686.085 - 12738.724: 65.3470% ( 181) 00:09:26.277 12738.724 - 12791.364: 66.8424% ( 156) 00:09:26.277 12791.364 - 12844.003: 68.0502% ( 126) 00:09:26.277 12844.003 - 12896.643: 68.9225% ( 91) 00:09:26.277 12896.643 - 12949.282: 69.9003% ( 102) 00:09:26.277 12949.282 - 13001.921: 70.9931% ( 114) 00:09:26.277 13001.921 
- 13054.561: 71.9229% ( 97) 00:09:26.277 13054.561 - 13107.200: 72.7569% ( 87) 00:09:26.277 13107.200 - 13159.839: 73.5525% ( 83) 00:09:26.277 13159.839 - 13212.479: 74.1181% ( 59) 00:09:26.277 13212.479 - 13265.118: 74.5303% ( 43) 00:09:26.277 13265.118 - 13317.757: 74.8946% ( 38) 00:09:26.277 13317.757 - 13370.397: 75.2972% ( 42) 00:09:26.277 13370.397 - 13423.036: 75.6806% ( 40) 00:09:26.277 13423.036 - 13475.676: 76.0161% ( 35) 00:09:26.277 13475.676 - 13580.954: 76.8597% ( 88) 00:09:26.277 13580.954 - 13686.233: 77.3294% ( 49) 00:09:26.277 13686.233 - 13791.512: 77.9525% ( 65) 00:09:26.277 13791.512 - 13896.790: 78.7097% ( 79) 00:09:26.277 13896.790 - 14002.069: 79.1986% ( 51) 00:09:26.277 14002.069 - 14107.348: 79.6683% ( 49) 00:09:26.277 14107.348 - 14212.627: 80.1956% ( 55) 00:09:26.277 14212.627 - 14317.905: 80.7611% ( 59) 00:09:26.277 14317.905 - 14423.184: 81.2500% ( 51) 00:09:26.277 14423.184 - 14528.463: 81.7676% ( 54) 00:09:26.277 14528.463 - 14633.741: 82.2565% ( 51) 00:09:26.277 14633.741 - 14739.020: 82.9179% ( 69) 00:09:26.277 14739.020 - 14844.299: 83.7519% ( 87) 00:09:26.277 14844.299 - 14949.578: 84.2120% ( 48) 00:09:26.277 14949.578 - 15054.856: 84.7105% ( 52) 00:09:26.277 15054.856 - 15160.135: 85.0652% ( 37) 00:09:26.277 15160.135 - 15265.414: 85.4199% ( 37) 00:09:26.277 15265.414 - 15370.692: 85.8033% ( 40) 00:09:26.277 15370.692 - 15475.971: 86.2922% ( 51) 00:09:26.277 15475.971 - 15581.250: 86.9057% ( 64) 00:09:26.277 15581.250 - 15686.529: 87.4904% ( 61) 00:09:26.277 15686.529 - 15791.807: 88.2189% ( 76) 00:09:26.277 15791.807 - 15897.086: 88.8516% ( 66) 00:09:26.277 15897.086 - 16002.365: 89.5706% ( 75) 00:09:26.277 16002.365 - 16107.643: 90.2224% ( 68) 00:09:26.277 16107.643 - 16212.922: 90.7496% ( 55) 00:09:26.277 16212.922 - 16318.201: 91.3439% ( 62) 00:09:26.277 16318.201 - 16423.480: 91.6507% ( 32) 00:09:26.277 16423.480 - 16528.758: 91.7945% ( 15) 00:09:26.277 16528.758 - 16634.037: 91.9383% ( 15) 00:09:26.277 16634.037 - 16739.316: 92.0725% ( 14) 00:09:26.277 16739.316 - 16844.594: 92.1683% ( 10) 00:09:26.277 16844.594 - 16949.873: 92.2738% ( 11) 00:09:26.277 16949.873 - 17055.152: 92.4271% ( 16) 00:09:26.277 17055.152 - 17160.431: 92.5518% ( 13) 00:09:26.277 17160.431 - 17265.709: 92.7147% ( 17) 00:09:26.277 17265.709 - 17370.988: 92.9448% ( 24) 00:09:26.277 17370.988 - 17476.267: 93.1077% ( 17) 00:09:26.277 17476.267 - 17581.545: 93.4624% ( 37) 00:09:26.277 17581.545 - 17686.824: 93.7979% ( 35) 00:09:26.277 17686.824 - 17792.103: 94.2293% ( 45) 00:09:26.277 17792.103 - 17897.382: 94.4689% ( 25) 00:09:26.277 17897.382 - 18002.660: 94.6990% ( 24) 00:09:26.277 18002.660 - 18107.939: 94.9387% ( 25) 00:09:26.277 18107.939 - 18213.218: 95.1304% ( 20) 00:09:26.277 18213.218 - 18318.496: 95.3125% ( 19) 00:09:26.277 18318.496 - 18423.775: 95.4659% ( 16) 00:09:26.277 18423.775 - 18529.054: 95.6288% ( 17) 00:09:26.277 18529.054 - 18634.333: 95.8877% ( 27) 00:09:26.277 18634.333 - 18739.611: 96.1561% ( 28) 00:09:26.277 18739.611 - 18844.890: 96.6162% ( 48) 00:09:26.277 18844.890 - 18950.169: 96.9038% ( 30) 00:09:26.277 18950.169 - 19055.447: 97.2105% ( 32) 00:09:26.277 19055.447 - 19160.726: 97.6706% ( 48) 00:09:26.277 19160.726 - 19266.005: 98.0061% ( 35) 00:09:26.277 19266.005 - 19371.284: 98.1979% ( 20) 00:09:26.277 19371.284 - 19476.562: 98.3608% ( 17) 00:09:26.277 19476.562 - 19581.841: 98.4183% ( 6) 00:09:26.277 19581.841 - 19687.120: 98.4663% ( 5) 00:09:26.277 19687.120 - 19792.398: 98.5142% ( 5) 00:09:26.277 19792.398 - 19897.677: 98.5717% ( 6) 
00:09:26.277 19897.677 - 20002.956: 98.6196% ( 5) 00:09:26.277 20002.956 - 20108.235: 98.6771% ( 6) 00:09:26.277 20108.235 - 20213.513: 98.7251% ( 5) 00:09:26.277 20213.513 - 20318.792: 98.7730% ( 5) 00:09:26.277 27372.466 - 27583.023: 98.8497% ( 8) 00:09:26.277 27583.023 - 27793.581: 98.9551% ( 11) 00:09:26.277 27793.581 - 28004.138: 99.0702% ( 12) 00:09:26.277 28004.138 - 28214.696: 99.1852% ( 12) 00:09:26.277 28214.696 - 28425.253: 99.3002% ( 12) 00:09:26.277 28425.253 - 28635.810: 99.3865% ( 9) 00:09:26.277 35163.091 - 35373.648: 99.4248% ( 4) 00:09:26.277 35373.648 - 35584.206: 99.5399% ( 12) 00:09:26.277 35584.206 - 35794.763: 99.6741% ( 14) 00:09:26.277 35794.763 - 36005.320: 99.7987% ( 13) 00:09:26.277 36005.320 - 36215.878: 99.9233% ( 13) 00:09:26.277 36215.878 - 36426.435: 100.0000% ( 8) 00:09:26.277 00:09:26.277 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:26.277 ============================================================================== 00:09:26.277 Range in us Cumulative IO count 00:09:26.277 6000.887 - 6027.206: 0.0096% ( 1) 00:09:26.277 6053.526 - 6079.846: 0.0192% ( 1) 00:09:26.277 6185.124 - 6211.444: 0.0383% ( 2) 00:09:26.277 6211.444 - 6237.764: 0.0575% ( 2) 00:09:26.277 6237.764 - 6264.084: 0.0671% ( 1) 00:09:26.277 6264.084 - 6290.403: 0.0863% ( 2) 00:09:26.277 6290.403 - 6316.723: 0.1150% ( 3) 00:09:26.277 6316.723 - 6343.043: 0.1438% ( 3) 00:09:26.277 6343.043 - 6369.362: 0.1821% ( 4) 00:09:26.277 6369.362 - 6395.682: 0.2301% ( 5) 00:09:26.277 6395.682 - 6422.002: 0.2492% ( 2) 00:09:26.277 6422.002 - 6448.321: 0.2876% ( 4) 00:09:26.277 6448.321 - 6474.641: 0.3834% ( 10) 00:09:26.277 6474.641 - 6500.961: 0.4026% ( 2) 00:09:26.277 6500.961 - 6527.280: 0.4410% ( 4) 00:09:26.277 6527.280 - 6553.600: 0.4793% ( 4) 00:09:26.277 6553.600 - 6579.920: 0.5176% ( 4) 00:09:26.278 6579.920 - 6606.239: 0.5560% ( 4) 00:09:26.278 6606.239 - 6632.559: 0.5656% ( 1) 00:09:26.278 6632.559 - 6658.879: 0.5847% ( 2) 00:09:26.278 6658.879 - 6685.198: 0.6039% ( 2) 00:09:26.278 6685.198 - 6711.518: 0.6135% ( 1) 00:09:26.278 8053.822 - 8106.461: 0.6423% ( 3) 00:09:26.278 8106.461 - 8159.100: 0.6902% ( 5) 00:09:26.278 8159.100 - 8211.740: 0.8052% ( 12) 00:09:26.278 8211.740 - 8264.379: 0.9778% ( 18) 00:09:26.278 8264.379 - 8317.018: 1.2174% ( 25) 00:09:26.278 8317.018 - 8369.658: 1.4762% ( 27) 00:09:26.278 8369.658 - 8422.297: 1.6871% ( 22) 00:09:26.278 8422.297 - 8474.937: 1.9076% ( 23) 00:09:26.278 8474.937 - 8527.576: 2.0993% ( 20) 00:09:26.278 8527.576 - 8580.215: 2.4923% ( 41) 00:09:26.278 8580.215 - 8632.855: 2.7991% ( 32) 00:09:26.278 8632.855 - 8685.494: 3.2496% ( 47) 00:09:26.278 8685.494 - 8738.133: 3.5947% ( 36) 00:09:26.278 8738.133 - 8790.773: 3.9206% ( 34) 00:09:26.278 8790.773 - 8843.412: 4.3041% ( 40) 00:09:26.278 8843.412 - 8896.051: 4.7163% ( 43) 00:09:26.278 8896.051 - 8948.691: 5.2339% ( 54) 00:09:26.278 8948.691 - 9001.330: 5.8186% ( 61) 00:09:26.278 9001.330 - 9053.969: 6.6526% ( 87) 00:09:26.278 9053.969 - 9106.609: 7.5729% ( 96) 00:09:26.278 9106.609 - 9159.248: 8.8765% ( 136) 00:09:26.278 9159.248 - 9211.888: 10.0939% ( 127) 00:09:26.278 9211.888 - 9264.527: 11.2347% ( 119) 00:09:26.278 9264.527 - 9317.166: 12.2604% ( 107) 00:09:26.278 9317.166 - 9369.806: 13.4298% ( 122) 00:09:26.278 9369.806 - 9422.445: 14.6664% ( 129) 00:09:26.278 9422.445 - 9475.084: 15.8359% ( 122) 00:09:26.278 9475.084 - 9527.724: 16.9670% ( 118) 00:09:26.278 9527.724 - 9580.363: 18.1844% ( 127) 00:09:26.278 9580.363 - 9633.002: 19.4785% ( 135) 00:09:26.278 9633.002 - 
9685.642: 20.8877% ( 147) 00:09:26.278 9685.642 - 9738.281: 22.3255% ( 150) 00:09:26.278 9738.281 - 9790.920: 23.6292% ( 136) 00:09:26.278 9790.920 - 9843.560: 24.6837% ( 110) 00:09:26.278 9843.560 - 9896.199: 26.0257% ( 140) 00:09:26.278 9896.199 - 9948.839: 27.2719% ( 130) 00:09:26.278 9948.839 - 10001.478: 28.4605% ( 124) 00:09:26.278 10001.478 - 10054.117: 29.6683% ( 126) 00:09:26.278 10054.117 - 10106.757: 31.1446% ( 154) 00:09:26.278 10106.757 - 10159.396: 32.3236% ( 123) 00:09:26.278 10159.396 - 10212.035: 33.4739% ( 120) 00:09:26.278 10212.035 - 10264.675: 34.5284% ( 110) 00:09:26.278 10264.675 - 10317.314: 35.4103% ( 92) 00:09:26.278 10317.314 - 10369.953: 36.5606% ( 120) 00:09:26.278 10369.953 - 10422.593: 37.1262% ( 59) 00:09:26.278 10422.593 - 10475.232: 37.9793% ( 89) 00:09:26.278 10475.232 - 10527.871: 38.7270% ( 78) 00:09:26.278 10527.871 - 10580.511: 39.6377% ( 95) 00:09:26.278 10580.511 - 10633.150: 40.2799% ( 67) 00:09:26.278 10633.150 - 10685.790: 41.3056% ( 107) 00:09:26.278 10685.790 - 10738.429: 41.7370% ( 45) 00:09:26.278 10738.429 - 10791.068: 42.1204% ( 40) 00:09:26.278 10791.068 - 10843.708: 42.6668% ( 57) 00:09:26.278 10843.708 - 10896.347: 43.0598% ( 41) 00:09:26.278 10896.347 - 10948.986: 43.4337% ( 39) 00:09:26.278 10948.986 - 11001.626: 43.8842% ( 47) 00:09:26.278 11001.626 - 11054.265: 44.2197% ( 35) 00:09:26.278 11054.265 - 11106.904: 44.5169% ( 31) 00:09:26.278 11106.904 - 11159.544: 44.7949% ( 29) 00:09:26.278 11159.544 - 11212.183: 45.0249% ( 24) 00:09:26.278 11212.183 - 11264.822: 45.2071% ( 19) 00:09:26.278 11264.822 - 11317.462: 45.3508% ( 15) 00:09:26.278 11317.462 - 11370.101: 45.5042% ( 16) 00:09:26.278 11370.101 - 11422.741: 45.7151% ( 22) 00:09:26.278 11422.741 - 11475.380: 45.9739% ( 27) 00:09:26.278 11475.380 - 11528.019: 46.2232% ( 26) 00:09:26.278 11528.019 - 11580.659: 46.4245% ( 21) 00:09:26.278 11580.659 - 11633.298: 46.7025% ( 29) 00:09:26.278 11633.298 - 11685.937: 47.0092% ( 32) 00:09:26.278 11685.937 - 11738.577: 47.2584% ( 26) 00:09:26.278 11738.577 - 11791.216: 47.5077% ( 26) 00:09:26.278 11791.216 - 11843.855: 47.7281% ( 23) 00:09:26.278 11843.855 - 11896.495: 48.1979% ( 49) 00:09:26.278 11896.495 - 11949.134: 48.4854% ( 30) 00:09:26.278 11949.134 - 12001.773: 48.7826% ( 31) 00:09:26.278 12001.773 - 12054.413: 49.2906% ( 53) 00:09:26.278 12054.413 - 12107.052: 49.6741% ( 40) 00:09:26.278 12107.052 - 12159.692: 50.2588% ( 61) 00:09:26.278 12159.692 - 12212.331: 51.0257% ( 80) 00:09:26.278 12212.331 - 12264.970: 52.1664% ( 119) 00:09:26.278 12264.970 - 12317.610: 53.3071% ( 119) 00:09:26.278 12317.610 - 12370.249: 54.6396% ( 139) 00:09:26.278 12370.249 - 12422.888: 56.2404% ( 167) 00:09:26.278 12422.888 - 12475.528: 57.9084% ( 174) 00:09:26.278 12475.528 - 12528.167: 59.6434% ( 181) 00:09:26.278 12528.167 - 12580.806: 61.3113% ( 174) 00:09:26.278 12580.806 - 12633.446: 62.7109% ( 146) 00:09:26.278 12633.446 - 12686.085: 64.2446% ( 160) 00:09:26.278 12686.085 - 12738.724: 65.6921% ( 151) 00:09:26.278 12738.724 - 12791.364: 67.1971% ( 157) 00:09:26.278 12791.364 - 12844.003: 68.4049% ( 126) 00:09:26.278 12844.003 - 12896.643: 69.5169% ( 116) 00:09:26.278 12896.643 - 12949.282: 70.4371% ( 96) 00:09:26.278 12949.282 - 13001.921: 71.4820% ( 109) 00:09:26.278 13001.921 - 13054.561: 72.3543% ( 91) 00:09:26.278 13054.561 - 13107.200: 73.0637% ( 74) 00:09:26.278 13107.200 - 13159.839: 73.7538% ( 72) 00:09:26.278 13159.839 - 13212.479: 74.2523% ( 52) 00:09:26.278 13212.479 - 13265.118: 74.5782% ( 34) 00:09:26.278 13265.118 - 13317.757: 74.8658% 
( 30) 00:09:26.278 13317.757 - 13370.397: 75.1630% ( 31) 00:09:26.278 13370.397 - 13423.036: 75.5176% ( 37) 00:09:26.278 13423.036 - 13475.676: 75.8723% ( 37) 00:09:26.278 13475.676 - 13580.954: 76.5721% ( 73) 00:09:26.278 13580.954 - 13686.233: 77.2048% ( 66) 00:09:26.278 13686.233 - 13791.512: 77.8854% ( 71) 00:09:26.278 13791.512 - 13896.790: 78.4222% ( 56) 00:09:26.278 13896.790 - 14002.069: 78.8344% ( 43) 00:09:26.278 14002.069 - 14107.348: 79.3137% ( 50) 00:09:26.278 14107.348 - 14212.627: 79.7354% ( 44) 00:09:26.278 14212.627 - 14317.905: 80.0613% ( 34) 00:09:26.278 14317.905 - 14423.184: 80.4256% ( 38) 00:09:26.278 14423.184 - 14528.463: 80.9145% ( 51) 00:09:26.278 14528.463 - 14633.741: 81.3267% ( 43) 00:09:26.278 14633.741 - 14739.020: 81.9977% ( 70) 00:09:26.278 14739.020 - 14844.299: 82.6591% ( 69) 00:09:26.278 14844.299 - 14949.578: 83.3493% ( 72) 00:09:26.278 14949.578 - 15054.856: 83.8094% ( 48) 00:09:26.278 15054.856 - 15160.135: 84.7297% ( 96) 00:09:26.278 15160.135 - 15265.414: 85.5828% ( 89) 00:09:26.278 15265.414 - 15370.692: 86.2730% ( 72) 00:09:26.278 15370.692 - 15475.971: 87.0207% ( 78) 00:09:26.278 15475.971 - 15581.250: 87.5383% ( 54) 00:09:26.278 15581.250 - 15686.529: 88.0368% ( 52) 00:09:26.278 15686.529 - 15791.807: 88.7653% ( 76) 00:09:26.278 15791.807 - 15897.086: 89.5226% ( 79) 00:09:26.278 15897.086 - 16002.365: 90.0882% ( 59) 00:09:26.278 16002.365 - 16107.643: 90.6921% ( 63) 00:09:26.278 16107.643 - 16212.922: 91.2289% ( 56) 00:09:26.278 16212.922 - 16318.201: 91.4877% ( 27) 00:09:26.278 16318.201 - 16423.480: 91.6986% ( 22) 00:09:26.278 16423.480 - 16528.758: 91.9287% ( 24) 00:09:26.278 16528.758 - 16634.037: 92.1971% ( 28) 00:09:26.278 16634.037 - 16739.316: 92.3217% ( 13) 00:09:26.278 16739.316 - 16844.594: 92.4367% ( 12) 00:09:26.278 16844.594 - 16949.873: 92.5230% ( 9) 00:09:26.278 16949.873 - 17055.152: 92.6476% ( 13) 00:09:26.278 17055.152 - 17160.431: 92.8393% ( 20) 00:09:26.278 17160.431 - 17265.709: 93.0311% ( 20) 00:09:26.278 17265.709 - 17370.988: 93.1461% ( 12) 00:09:26.278 17370.988 - 17476.267: 93.2899% ( 15) 00:09:26.278 17476.267 - 17581.545: 93.5487% ( 27) 00:09:26.278 17581.545 - 17686.824: 93.7979% ( 26) 00:09:26.278 17686.824 - 17792.103: 94.0663% ( 28) 00:09:26.278 17792.103 - 17897.382: 94.3060% ( 25) 00:09:26.278 17897.382 - 18002.660: 94.6798% ( 39) 00:09:26.278 18002.660 - 18107.939: 95.0058% ( 34) 00:09:26.278 18107.939 - 18213.218: 95.2933% ( 30) 00:09:26.279 18213.218 - 18318.496: 95.5426% ( 26) 00:09:26.279 18318.496 - 18423.775: 95.8014% ( 27) 00:09:26.279 18423.775 - 18529.054: 95.9931% ( 20) 00:09:26.279 18529.054 - 18634.333: 96.2136% ( 23) 00:09:26.279 18634.333 - 18739.611: 96.3861% ( 18) 00:09:26.279 18739.611 - 18844.890: 96.5012% ( 12) 00:09:26.279 18844.890 - 18950.169: 96.5874% ( 9) 00:09:26.279 18950.169 - 19055.447: 96.7216% ( 14) 00:09:26.279 19055.447 - 19160.726: 96.8750% ( 16) 00:09:26.279 19160.726 - 19266.005: 97.2680% ( 41) 00:09:26.279 19266.005 - 19371.284: 97.4981% ( 24) 00:09:26.279 19371.284 - 19476.562: 97.6515% ( 16) 00:09:26.279 19476.562 - 19581.841: 97.8432% ( 20) 00:09:26.279 19581.841 - 19687.120: 98.0445% ( 21) 00:09:26.279 19687.120 - 19792.398: 98.3992% ( 37) 00:09:26.279 19792.398 - 19897.677: 98.5621% ( 17) 00:09:26.279 19897.677 - 20002.956: 98.6771% ( 12) 00:09:26.279 20002.956 - 20108.235: 98.7538% ( 8) 00:09:26.279 20108.235 - 20213.513: 98.7730% ( 2) 00:09:26.279 27161.908 - 27372.466: 98.8209% ( 5) 00:09:26.279 27372.466 - 27583.023: 98.9264% ( 11) 00:09:26.279 27583.023 - 
27793.581: 99.0414% ( 12) 00:09:26.279 27793.581 - 28004.138: 99.1469% ( 11) 00:09:26.279 28004.138 - 28214.696: 99.2619% ( 12) 00:09:26.279 28214.696 - 28425.253: 99.3673% ( 11) 00:09:26.279 28425.253 - 28635.810: 99.3865% ( 2) 00:09:26.279 34741.976 - 34952.533: 99.4440% ( 6) 00:09:26.279 34952.533 - 35163.091: 99.5590% ( 12) 00:09:26.279 35163.091 - 35373.648: 99.6837% ( 13) 00:09:26.279 35373.648 - 35584.206: 99.8083% ( 13) 00:09:26.279 35584.206 - 35794.763: 99.9425% ( 14) 00:09:26.279 35794.763 - 36005.320: 100.0000% ( 6) 00:09:26.279 00:09:26.279 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:26.279 ============================================================================== 00:09:26.279 Range in us Cumulative IO count 00:09:26.279 5948.247 - 5974.567: 0.0096% ( 1) 00:09:26.279 5974.567 - 6000.887: 0.0288% ( 2) 00:09:26.279 6000.887 - 6027.206: 0.0863% ( 6) 00:09:26.279 6027.206 - 6053.526: 0.1342% ( 5) 00:09:26.279 6053.526 - 6079.846: 0.1725% ( 4) 00:09:26.279 6079.846 - 6106.165: 0.2205% ( 5) 00:09:26.279 6106.165 - 6132.485: 0.3643% ( 15) 00:09:26.279 6132.485 - 6158.805: 0.4314% ( 7) 00:09:26.279 6158.805 - 6185.124: 0.4697% ( 4) 00:09:26.279 6185.124 - 6211.444: 0.5081% ( 4) 00:09:26.279 6211.444 - 6237.764: 0.5464% ( 4) 00:09:26.279 6237.764 - 6264.084: 0.5656% ( 2) 00:09:26.279 6264.084 - 6290.403: 0.5847% ( 2) 00:09:26.279 6290.403 - 6316.723: 0.6039% ( 2) 00:09:26.279 6316.723 - 6343.043: 0.6135% ( 1) 00:09:26.279 7895.904 - 7948.543: 0.6231% ( 1) 00:09:26.279 7948.543 - 8001.182: 0.6614% ( 4) 00:09:26.279 8001.182 - 8053.822: 0.7765% ( 12) 00:09:26.279 8053.822 - 8106.461: 0.9490% ( 18) 00:09:26.279 8106.461 - 8159.100: 1.1407% ( 20) 00:09:26.279 8159.100 - 8211.740: 1.4475% ( 32) 00:09:26.279 8211.740 - 8264.379: 1.5913% ( 15) 00:09:26.279 8264.379 - 8317.018: 1.7159% ( 13) 00:09:26.279 8317.018 - 8369.658: 1.8021% ( 9) 00:09:26.279 8369.658 - 8422.297: 1.9268% ( 13) 00:09:26.279 8422.297 - 8474.937: 2.0706% ( 15) 00:09:26.279 8474.937 - 8527.576: 2.3677% ( 31) 00:09:26.279 8527.576 - 8580.215: 2.8758% ( 53) 00:09:26.279 8580.215 - 8632.855: 3.3455% ( 49) 00:09:26.279 8632.855 - 8685.494: 3.8823% ( 56) 00:09:26.279 8685.494 - 8738.133: 4.2753% ( 41) 00:09:26.279 8738.133 - 8790.773: 4.5629% ( 30) 00:09:26.279 8790.773 - 8843.412: 4.8121% ( 26) 00:09:26.279 8843.412 - 8896.051: 5.0997% ( 30) 00:09:26.279 8896.051 - 8948.691: 5.3393% ( 25) 00:09:26.279 8948.691 - 9001.330: 5.8282% ( 51) 00:09:26.279 9001.330 - 9053.969: 6.2212% ( 41) 00:09:26.279 9053.969 - 9106.609: 6.7101% ( 51) 00:09:26.279 9106.609 - 9159.248: 7.5058% ( 83) 00:09:26.279 9159.248 - 9211.888: 8.5794% ( 112) 00:09:26.279 9211.888 - 9264.527: 9.8639% ( 134) 00:09:26.279 9264.527 - 9317.166: 11.2826% ( 148) 00:09:26.279 9317.166 - 9369.806: 12.8738% ( 166) 00:09:26.279 9369.806 - 9422.445: 14.0529% ( 123) 00:09:26.279 9422.445 - 9475.084: 15.4812% ( 149) 00:09:26.279 9475.084 - 9527.724: 16.7561% ( 133) 00:09:26.279 9527.724 - 9580.363: 18.0502% ( 135) 00:09:26.279 9580.363 - 9633.002: 19.7565% ( 178) 00:09:26.279 9633.002 - 9685.642: 20.9356% ( 123) 00:09:26.279 9685.642 - 9738.281: 22.1530% ( 127) 00:09:26.279 9738.281 - 9790.920: 23.4087% ( 131) 00:09:26.279 9790.920 - 9843.560: 24.7316% ( 138) 00:09:26.279 9843.560 - 9896.199: 26.0257% ( 135) 00:09:26.279 9896.199 - 9948.839: 27.2143% ( 124) 00:09:26.279 9948.839 - 10001.478: 28.4509% ( 129) 00:09:26.279 10001.478 - 10054.117: 29.8696% ( 148) 00:09:26.279 10054.117 - 10106.757: 31.2596% ( 145) 00:09:26.279 10106.757 - 10159.396: 
32.6783% ( 148) 00:09:26.279 10159.396 - 10212.035: 33.7136% ( 108) 00:09:26.279 10212.035 - 10264.675: 34.7009% ( 103) 00:09:26.279 10264.675 - 10317.314: 35.4199% ( 75) 00:09:26.279 10317.314 - 10369.953: 36.2922% ( 91) 00:09:26.279 10369.953 - 10422.593: 37.2220% ( 97) 00:09:26.279 10422.593 - 10475.232: 37.8451% ( 65) 00:09:26.279 10475.232 - 10527.871: 38.4394% ( 62) 00:09:26.279 10527.871 - 10580.511: 38.9475% ( 53) 00:09:26.279 10580.511 - 10633.150: 39.6760% ( 76) 00:09:26.279 10633.150 - 10685.790: 40.3949% ( 75) 00:09:26.279 10685.790 - 10738.429: 40.8646% ( 49) 00:09:26.279 10738.429 - 10791.068: 41.4973% ( 66) 00:09:26.279 10791.068 - 10843.708: 42.0629% ( 59) 00:09:26.279 10843.708 - 10896.347: 42.5134% ( 47) 00:09:26.279 10896.347 - 10948.986: 43.0790% ( 59) 00:09:26.279 10948.986 - 11001.626: 43.3857% ( 32) 00:09:26.279 11001.626 - 11054.265: 43.6733% ( 30) 00:09:26.279 11054.265 - 11106.904: 43.9417% ( 28) 00:09:26.279 11106.904 - 11159.544: 44.0663% ( 13) 00:09:26.279 11159.544 - 11212.183: 44.2005% ( 14) 00:09:26.279 11212.183 - 11264.822: 44.3443% ( 15) 00:09:26.279 11264.822 - 11317.462: 44.5936% ( 26) 00:09:26.279 11317.462 - 11370.101: 44.8620% ( 28) 00:09:26.279 11370.101 - 11422.741: 45.2837% ( 44) 00:09:26.279 11422.741 - 11475.380: 45.5234% ( 25) 00:09:26.279 11475.380 - 11528.019: 45.8781% ( 37) 00:09:26.279 11528.019 - 11580.659: 46.2903% ( 43) 00:09:26.279 11580.659 - 11633.298: 46.6833% ( 41) 00:09:26.279 11633.298 - 11685.937: 47.0475% ( 38) 00:09:26.279 11685.937 - 11738.577: 47.3735% ( 34) 00:09:26.279 11738.577 - 11791.216: 47.5939% ( 23) 00:09:26.279 11791.216 - 11843.855: 47.7857% ( 20) 00:09:26.279 11843.855 - 11896.495: 47.9582% ( 18) 00:09:26.279 11896.495 - 11949.134: 48.2937% ( 35) 00:09:26.279 11949.134 - 12001.773: 48.6867% ( 41) 00:09:26.279 12001.773 - 12054.413: 49.1852% ( 52) 00:09:26.279 12054.413 - 12107.052: 49.8562% ( 70) 00:09:26.279 12107.052 - 12159.692: 50.7285% ( 91) 00:09:26.279 12159.692 - 12212.331: 51.7830% ( 110) 00:09:26.279 12212.331 - 12264.970: 52.8374% ( 110) 00:09:26.279 12264.970 - 12317.610: 53.9398% ( 115) 00:09:26.279 12317.610 - 12370.249: 55.3489% ( 147) 00:09:26.279 12370.249 - 12422.888: 56.7964% ( 151) 00:09:26.279 12422.888 - 12475.528: 58.3972% ( 167) 00:09:26.279 12475.528 - 12528.167: 59.7872% ( 145) 00:09:26.279 12528.167 - 12580.806: 61.2730% ( 155) 00:09:26.279 12580.806 - 12633.446: 62.9314% ( 173) 00:09:26.279 12633.446 - 12686.085: 64.1775% ( 130) 00:09:26.279 12686.085 - 12738.724: 65.5004% ( 138) 00:09:26.279 12738.724 - 12791.364: 67.0054% ( 157) 00:09:26.279 12791.364 - 12844.003: 68.0406% ( 108) 00:09:26.279 12844.003 - 12896.643: 68.8842% ( 88) 00:09:26.279 12896.643 - 12949.282: 69.9962% ( 116) 00:09:26.279 12949.282 - 13001.921: 70.8972% ( 94) 00:09:26.279 13001.921 - 13054.561: 71.7408% ( 88) 00:09:26.279 13054.561 - 13107.200: 72.7090% ( 101) 00:09:26.279 13107.200 - 13159.839: 73.6771% ( 101) 00:09:26.279 13159.839 - 13212.479: 74.7124% ( 108) 00:09:26.279 13212.479 - 13265.118: 75.3451% ( 66) 00:09:26.279 13265.118 - 13317.757: 75.8244% ( 50) 00:09:26.279 13317.757 - 13370.397: 76.2941% ( 49) 00:09:26.279 13370.397 - 13423.036: 76.6967% ( 42) 00:09:26.279 13423.036 - 13475.676: 76.9939% ( 31) 00:09:26.279 13475.676 - 13580.954: 77.3581% ( 38) 00:09:26.279 13580.954 - 13686.233: 77.7128% ( 37) 00:09:26.279 13686.233 - 13791.512: 78.2784% ( 59) 00:09:26.279 13791.512 - 13896.790: 78.8727% ( 62) 00:09:26.279 13896.790 - 14002.069: 79.3424% ( 49) 00:09:26.279 14002.069 - 14107.348: 79.7354% ( 41) 
00:09:26.279 14107.348 - 14212.627: 80.2818% ( 57) 00:09:26.279 14212.627 - 14317.905: 80.7707% ( 51) 00:09:26.279 14317.905 - 14423.184: 81.1829% ( 43) 00:09:26.279 14423.184 - 14528.463: 81.6910% ( 53) 00:09:26.279 14528.463 - 14633.741: 82.0648% ( 39) 00:09:26.279 14633.741 - 14739.020: 82.4578% ( 41) 00:09:26.279 14739.020 - 14844.299: 82.8700% ( 43) 00:09:26.279 14844.299 - 14949.578: 83.2439% ( 39) 00:09:26.279 14949.578 - 15054.856: 83.7423% ( 52) 00:09:26.279 15054.856 - 15160.135: 84.2696% ( 55) 00:09:26.279 15160.135 - 15265.414: 84.7872% ( 54) 00:09:26.279 15265.414 - 15370.692: 85.2761% ( 51) 00:09:26.279 15370.692 - 15475.971: 85.8416% ( 59) 00:09:26.279 15475.971 - 15581.250: 86.6085% ( 80) 00:09:26.279 15581.250 - 15686.529: 87.5000% ( 93) 00:09:26.279 15686.529 - 15791.807: 88.2094% ( 74) 00:09:26.279 15791.807 - 15897.086: 88.9187% ( 74) 00:09:26.280 15897.086 - 16002.365: 89.4651% ( 57) 00:09:26.280 16002.365 - 16107.643: 90.0498% ( 61) 00:09:26.280 16107.643 - 16212.922: 90.4812% ( 45) 00:09:26.280 16212.922 - 16318.201: 90.8167% ( 35) 00:09:26.280 16318.201 - 16423.480: 91.3535% ( 56) 00:09:26.280 16423.480 - 16528.758: 91.6890% ( 35) 00:09:26.280 16528.758 - 16634.037: 91.8520% ( 17) 00:09:26.280 16634.037 - 16739.316: 92.0629% ( 22) 00:09:26.280 16739.316 - 16844.594: 92.3984% ( 35) 00:09:26.280 16844.594 - 16949.873: 92.7722% ( 39) 00:09:26.280 16949.873 - 17055.152: 93.1269% ( 37) 00:09:26.280 17055.152 - 17160.431: 93.4720% ( 36) 00:09:26.280 17160.431 - 17265.709: 93.7117% ( 25) 00:09:26.280 17265.709 - 17370.988: 93.9321% ( 23) 00:09:26.280 17370.988 - 17476.267: 94.1526% ( 23) 00:09:26.280 17476.267 - 17581.545: 94.3539% ( 21) 00:09:26.280 17581.545 - 17686.824: 94.5840% ( 24) 00:09:26.280 17686.824 - 17792.103: 94.7949% ( 22) 00:09:26.280 17792.103 - 17897.382: 95.0058% ( 22) 00:09:26.280 17897.382 - 18002.660: 95.2837% ( 29) 00:09:26.280 18002.660 - 18107.939: 95.6863% ( 42) 00:09:26.280 18107.939 - 18213.218: 95.9356% ( 26) 00:09:26.280 18213.218 - 18318.496: 96.1369% ( 21) 00:09:26.280 18318.496 - 18423.775: 96.2807% ( 15) 00:09:26.280 18423.775 - 18529.054: 96.4053% ( 13) 00:09:26.280 18529.054 - 18634.333: 96.5203% ( 12) 00:09:26.280 18634.333 - 18739.611: 96.6929% ( 18) 00:09:26.280 18739.611 - 18844.890: 96.9613% ( 28) 00:09:26.280 18844.890 - 18950.169: 97.0955% ( 14) 00:09:26.280 18950.169 - 19055.447: 97.2009% ( 11) 00:09:26.280 19055.447 - 19160.726: 97.3160% ( 12) 00:09:26.280 19160.726 - 19266.005: 97.4310% ( 12) 00:09:26.280 19266.005 - 19371.284: 97.5460% ( 12) 00:09:26.280 19371.284 - 19476.562: 97.8911% ( 36) 00:09:26.280 19476.562 - 19581.841: 97.9582% ( 7) 00:09:26.280 19581.841 - 19687.120: 98.0157% ( 6) 00:09:26.280 19687.120 - 19792.398: 98.0732% ( 6) 00:09:26.280 19792.398 - 19897.677: 98.1020% ( 3) 00:09:26.280 19897.677 - 20002.956: 98.1308% ( 3) 00:09:26.280 20002.956 - 20108.235: 98.1595% ( 3) 00:09:26.280 20108.235 - 20213.513: 98.1883% ( 3) 00:09:26.280 20213.513 - 20318.792: 98.3129% ( 13) 00:09:26.280 20318.792 - 20424.071: 98.5238% ( 22) 00:09:26.280 20424.071 - 20529.349: 98.6388% ( 12) 00:09:26.280 20529.349 - 20634.628: 98.7347% ( 10) 00:09:26.280 20634.628 - 20739.907: 98.7730% ( 4) 00:09:26.280 27161.908 - 27372.466: 98.8305% ( 6) 00:09:26.280 27372.466 - 27583.023: 98.9360% ( 11) 00:09:26.280 27583.023 - 27793.581: 99.0510% ( 12) 00:09:26.280 27793.581 - 28004.138: 99.1660% ( 12) 00:09:26.280 28004.138 - 28214.696: 99.2811% ( 12) 00:09:26.280 28214.696 - 28425.253: 99.3865% ( 11) 00:09:26.280 34531.418 - 34741.976: 99.5015% 
( 12) 00:09:26.280 34741.976 - 34952.533: 99.6262% ( 13) 00:09:26.280 34952.533 - 35163.091: 99.7412% ( 12) 00:09:26.280 35163.091 - 35373.648: 99.8658% ( 13) 00:09:26.280 35373.648 - 35584.206: 99.9904% ( 13) 00:09:26.280 35584.206 - 35794.763: 100.0000% ( 1) 00:09:26.280 00:09:26.280 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:26.280 ============================================================================== 00:09:26.280 Range in us Cumulative IO count 00:09:26.280 5842.969 - 5869.288: 0.0191% ( 2) 00:09:26.280 5869.288 - 5895.608: 0.0476% ( 3) 00:09:26.280 5895.608 - 5921.928: 0.0667% ( 2) 00:09:26.280 5921.928 - 5948.247: 0.1715% ( 11) 00:09:26.280 5948.247 - 5974.567: 0.2287% ( 6) 00:09:26.280 5974.567 - 6000.887: 0.4478% ( 23) 00:09:26.280 6000.887 - 6027.206: 0.5145% ( 7) 00:09:26.280 6027.206 - 6053.526: 0.5240% ( 1) 00:09:26.280 6053.526 - 6079.846: 0.5431% ( 2) 00:09:26.280 6079.846 - 6106.165: 0.5526% ( 1) 00:09:26.280 6106.165 - 6132.485: 0.5716% ( 2) 00:09:26.280 6132.485 - 6158.805: 0.5812% ( 1) 00:09:26.280 6158.805 - 6185.124: 0.6002% ( 2) 00:09:26.280 6185.124 - 6211.444: 0.6098% ( 1) 00:09:26.280 7790.625 - 7843.264: 0.6193% ( 1) 00:09:26.280 7843.264 - 7895.904: 0.6669% ( 5) 00:09:26.280 7895.904 - 7948.543: 0.8003% ( 14) 00:09:26.280 7948.543 - 8001.182: 0.9718% ( 18) 00:09:26.280 8001.182 - 8053.822: 1.1147% ( 15) 00:09:26.280 8053.822 - 8106.461: 1.3815% ( 28) 00:09:26.280 8106.461 - 8159.100: 1.5434% ( 17) 00:09:26.280 8159.100 - 8211.740: 1.6959% ( 16) 00:09:26.280 8211.740 - 8264.379: 1.8102% ( 12) 00:09:26.280 8264.379 - 8317.018: 1.8388% ( 3) 00:09:26.280 8317.018 - 8369.658: 1.8674% ( 3) 00:09:26.280 8369.658 - 8422.297: 1.9436% ( 8) 00:09:26.280 8422.297 - 8474.937: 2.0293% ( 9) 00:09:26.280 8474.937 - 8527.576: 2.1532% ( 13) 00:09:26.280 8527.576 - 8580.215: 2.3819% ( 24) 00:09:26.280 8580.215 - 8632.855: 2.5915% ( 22) 00:09:26.280 8632.855 - 8685.494: 2.8392% ( 26) 00:09:26.280 8685.494 - 8738.133: 3.2489% ( 43) 00:09:26.280 8738.133 - 8790.773: 3.6204% ( 39) 00:09:26.280 8790.773 - 8843.412: 4.1063% ( 51) 00:09:26.280 8843.412 - 8896.051: 4.7732% ( 70) 00:09:26.280 8896.051 - 8948.691: 5.5640% ( 83) 00:09:26.280 8948.691 - 9001.330: 6.5263% ( 101) 00:09:26.280 9001.330 - 9053.969: 7.0694% ( 57) 00:09:26.280 9053.969 - 9106.609: 7.5838% ( 54) 00:09:26.280 9106.609 - 9159.248: 8.5080% ( 97) 00:09:26.280 9159.248 - 9211.888: 9.3274% ( 86) 00:09:26.280 9211.888 - 9264.527: 9.9562% ( 66) 00:09:26.280 9264.527 - 9317.166: 11.0328% ( 113) 00:09:26.280 9317.166 - 9369.806: 12.1951% ( 122) 00:09:26.280 9369.806 - 9422.445: 13.4051% ( 127) 00:09:26.280 9422.445 - 9475.084: 15.0438% ( 172) 00:09:26.280 9475.084 - 9527.724: 16.6635% ( 170) 00:09:26.280 9527.724 - 9580.363: 18.1784% ( 159) 00:09:26.280 9580.363 - 9633.002: 19.5789% ( 147) 00:09:26.280 9633.002 - 9685.642: 21.1414% ( 164) 00:09:26.280 9685.642 - 9738.281: 22.7134% ( 165) 00:09:26.280 9738.281 - 9790.920: 23.9520% ( 130) 00:09:26.280 9790.920 - 9843.560: 25.1715% ( 128) 00:09:26.280 9843.560 - 9896.199: 26.6578% ( 156) 00:09:26.280 9896.199 - 9948.839: 28.0011% ( 141) 00:09:26.280 9948.839 - 10001.478: 29.1349% ( 119) 00:09:26.280 10001.478 - 10054.117: 30.0400% ( 95) 00:09:26.280 10054.117 - 10106.757: 31.0785% ( 109) 00:09:26.280 10106.757 - 10159.396: 31.9360% ( 90) 00:09:26.280 10159.396 - 10212.035: 32.8887% ( 100) 00:09:26.280 10212.035 - 10264.675: 33.7652% ( 92) 00:09:26.280 10264.675 - 10317.314: 35.1181% ( 142) 00:09:26.280 10317.314 - 10369.953: 35.9756% ( 90) 
00:09:26.280 10369.953 - 10422.593: 36.9284% ( 100) 00:09:26.280 10422.593 - 10475.232: 38.0050% ( 113) 00:09:26.280 10475.232 - 10527.871: 38.6052% ( 63) 00:09:26.280 10527.871 - 10580.511: 39.1006% ( 52) 00:09:26.280 10580.511 - 10633.150: 39.5293% ( 45) 00:09:26.280 10633.150 - 10685.790: 40.0057% ( 50) 00:09:26.280 10685.790 - 10738.429: 40.6250% ( 65) 00:09:26.280 10738.429 - 10791.068: 41.2062% ( 61) 00:09:26.280 10791.068 - 10843.708: 41.5873% ( 40) 00:09:26.280 10843.708 - 10896.347: 42.3114% ( 76) 00:09:26.280 10896.347 - 10948.986: 42.6829% ( 39) 00:09:26.280 10948.986 - 11001.626: 42.9497% ( 28) 00:09:26.280 11001.626 - 11054.265: 43.2546% ( 32) 00:09:26.280 11054.265 - 11106.904: 43.5785% ( 34) 00:09:26.280 11106.904 - 11159.544: 43.6928% ( 12) 00:09:26.280 11159.544 - 11212.183: 43.7976% ( 11) 00:09:26.280 11212.183 - 11264.822: 43.9120% ( 12) 00:09:26.280 11264.822 - 11317.462: 44.0263% ( 12) 00:09:26.280 11317.462 - 11370.101: 44.1883% ( 17) 00:09:26.280 11370.101 - 11422.741: 44.4836% ( 31) 00:09:26.280 11422.741 - 11475.380: 44.8838% ( 42) 00:09:26.280 11475.380 - 11528.019: 45.0743% ( 20) 00:09:26.280 11528.019 - 11580.659: 45.3030% ( 24) 00:09:26.280 11580.659 - 11633.298: 45.5793% ( 29) 00:09:26.280 11633.298 - 11685.937: 45.8556% ( 29) 00:09:26.280 11685.937 - 11738.577: 46.0842% ( 24) 00:09:26.280 11738.577 - 11791.216: 46.4177% ( 35) 00:09:26.280 11791.216 - 11843.855: 46.7797% ( 38) 00:09:26.280 11843.855 - 11896.495: 47.2561% ( 50) 00:09:26.280 11896.495 - 11949.134: 47.7896% ( 56) 00:09:26.280 11949.134 - 12001.773: 48.2088% ( 44) 00:09:26.280 12001.773 - 12054.413: 48.5804% ( 39) 00:09:26.280 12054.413 - 12107.052: 49.0473% ( 49) 00:09:26.280 12107.052 - 12159.692: 49.6475% ( 63) 00:09:26.280 12159.692 - 12212.331: 50.6955% ( 110) 00:09:26.280 12212.331 - 12264.970: 51.9531% ( 132) 00:09:26.280 12264.970 - 12317.610: 53.3155% ( 143) 00:09:26.280 12317.610 - 12370.249: 55.2782% ( 206) 00:09:26.280 12370.249 - 12422.888: 57.1456% ( 196) 00:09:26.280 12422.888 - 12475.528: 58.8129% ( 175) 00:09:26.280 12475.528 - 12528.167: 60.4992% ( 177) 00:09:26.280 12528.167 - 12580.806: 62.1761% ( 176) 00:09:26.280 12580.806 - 12633.446: 63.5861% ( 148) 00:09:26.280 12633.446 - 12686.085: 64.9676% ( 145) 00:09:26.280 12686.085 - 12738.724: 66.0633% ( 115) 00:09:26.280 12738.724 - 12791.364: 67.2256% ( 122) 00:09:26.280 12791.364 - 12844.003: 68.2832% ( 111) 00:09:26.280 12844.003 - 12896.643: 69.2359% ( 100) 00:09:26.280 12896.643 - 12949.282: 70.1886% ( 100) 00:09:26.280 12949.282 - 13001.921: 70.9223% ( 77) 00:09:26.280 13001.921 - 13054.561: 71.7130% ( 83) 00:09:26.280 13054.561 - 13107.200: 72.6467% ( 98) 00:09:26.280 13107.200 - 13159.839: 73.3994% ( 79) 00:09:26.280 13159.839 - 13212.479: 74.1711% ( 81) 00:09:26.280 13212.479 - 13265.118: 74.7809% ( 64) 00:09:26.280 13265.118 - 13317.757: 75.2572% ( 50) 00:09:26.280 13317.757 - 13370.397: 75.6479% ( 41) 00:09:26.281 13370.397 - 13423.036: 75.9623% ( 33) 00:09:26.281 13423.036 - 13475.676: 76.4386% ( 50) 00:09:26.281 13475.676 - 13580.954: 77.2771% ( 88) 00:09:26.281 13580.954 - 13686.233: 78.2107% ( 98) 00:09:26.281 13686.233 - 13791.512: 79.2016% ( 104) 00:09:26.281 13791.512 - 13896.790: 79.6399% ( 46) 00:09:26.281 13896.790 - 14002.069: 80.0781% ( 46) 00:09:26.281 14002.069 - 14107.348: 80.4497% ( 39) 00:09:26.281 14107.348 - 14212.627: 80.8117% ( 38) 00:09:26.281 14212.627 - 14317.905: 81.1738% ( 38) 00:09:26.281 14317.905 - 14423.184: 81.5835% ( 43) 00:09:26.281 14423.184 - 14528.463: 82.0408% ( 48) 00:09:26.281 
14528.463 - 14633.741: 82.6220% ( 61) 00:09:26.281 14633.741 - 14739.020: 83.2698% ( 68) 00:09:26.281 14739.020 - 14844.299: 83.6986% ( 45) 00:09:26.281 14844.299 - 14949.578: 83.9844% ( 30) 00:09:26.281 14949.578 - 15054.856: 84.2702% ( 30) 00:09:26.281 15054.856 - 15160.135: 84.7275% ( 48) 00:09:26.281 15160.135 - 15265.414: 85.1467% ( 44) 00:09:26.281 15265.414 - 15370.692: 85.7279% ( 61) 00:09:26.281 15370.692 - 15475.971: 86.2995% ( 60) 00:09:26.281 15475.971 - 15581.250: 86.6139% ( 33) 00:09:26.281 15581.250 - 15686.529: 86.9188% ( 32) 00:09:26.281 15686.529 - 15791.807: 87.4809% ( 59) 00:09:26.281 15791.807 - 15897.086: 87.8620% ( 40) 00:09:26.281 15897.086 - 16002.365: 88.1669% ( 32) 00:09:26.281 16002.365 - 16107.643: 88.3956% ( 24) 00:09:26.281 16107.643 - 16212.922: 88.7100% ( 33) 00:09:26.281 16212.922 - 16318.201: 89.3769% ( 70) 00:09:26.281 16318.201 - 16423.480: 89.9104% ( 56) 00:09:26.281 16423.480 - 16528.758: 90.5678% ( 69) 00:09:26.281 16528.758 - 16634.037: 91.2824% ( 75) 00:09:26.281 16634.037 - 16739.316: 92.0636% ( 82) 00:09:26.281 16739.316 - 16844.594: 92.6067% ( 57) 00:09:26.281 16844.594 - 16949.873: 93.0450% ( 46) 00:09:26.281 16949.873 - 17055.152: 93.4261% ( 40) 00:09:26.281 17055.152 - 17160.431: 93.7691% ( 36) 00:09:26.281 17160.431 - 17265.709: 94.0739% ( 32) 00:09:26.281 17265.709 - 17370.988: 94.3693% ( 31) 00:09:26.281 17370.988 - 17476.267: 94.8647% ( 52) 00:09:26.281 17476.267 - 17581.545: 95.3220% ( 48) 00:09:26.281 17581.545 - 17686.824: 95.5983% ( 29) 00:09:26.281 17686.824 - 17792.103: 95.7793% ( 19) 00:09:26.281 17792.103 - 17897.382: 95.9699% ( 20) 00:09:26.281 17897.382 - 18002.660: 96.1414% ( 18) 00:09:26.281 18002.660 - 18107.939: 96.2367% ( 10) 00:09:26.281 18107.939 - 18213.218: 96.3891% ( 16) 00:09:26.281 18213.218 - 18318.496: 96.4844% ( 10) 00:09:26.281 18318.496 - 18423.775: 96.5701% ( 9) 00:09:26.281 18423.775 - 18529.054: 96.6463% ( 8) 00:09:26.281 18529.054 - 18634.333: 96.7416% ( 10) 00:09:26.281 18634.333 - 18739.611: 96.8655% ( 13) 00:09:26.281 18739.611 - 18844.890: 96.9798% ( 12) 00:09:26.281 18844.890 - 18950.169: 97.1227% ( 15) 00:09:26.281 18950.169 - 19055.447: 97.4181% ( 31) 00:09:26.281 19055.447 - 19160.726: 97.5229% ( 11) 00:09:26.281 19160.726 - 19266.005: 97.7134% ( 20) 00:09:26.281 19266.005 - 19371.284: 98.0278% ( 33) 00:09:26.281 19371.284 - 19476.562: 98.0564% ( 3) 00:09:26.281 19476.562 - 19581.841: 98.0850% ( 3) 00:09:26.281 19581.841 - 19687.120: 98.1136% ( 3) 00:09:26.281 19687.120 - 19792.398: 98.1421% ( 3) 00:09:26.281 19792.398 - 19897.677: 98.1612% ( 2) 00:09:26.281 19897.677 - 20002.956: 98.1707% ( 1) 00:09:26.281 20108.235 - 20213.513: 98.2088% ( 4) 00:09:26.281 20213.513 - 20318.792: 98.2660% ( 6) 00:09:26.281 20318.792 - 20424.071: 98.3327% ( 7) 00:09:26.281 20424.071 - 20529.349: 98.3994% ( 7) 00:09:26.281 20529.349 - 20634.628: 98.4756% ( 8) 00:09:26.281 20634.628 - 20739.907: 98.5804% ( 11) 00:09:26.281 20739.907 - 20845.186: 98.7043% ( 13) 00:09:26.281 20845.186 - 20950.464: 98.9139% ( 22) 00:09:26.281 20950.464 - 21055.743: 99.2473% ( 35) 00:09:26.281 21055.743 - 21161.022: 99.3712% ( 13) 00:09:26.281 21161.022 - 21266.300: 99.3902% ( 2) 00:09:26.281 27583.023 - 27793.581: 99.4188% ( 3) 00:09:26.281 27793.581 - 28004.138: 99.5332% ( 12) 00:09:26.281 28004.138 - 28214.696: 99.6475% ( 12) 00:09:26.281 28214.696 - 28425.253: 99.7523% ( 11) 00:09:26.281 28425.253 - 28635.810: 99.8666% ( 12) 00:09:26.281 28635.810 - 28846.368: 99.9714% ( 11) 00:09:26.281 28846.368 - 29056.925: 100.0000% ( 3) 00:09:26.281 
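Editor's note: each histogram above lists a latency bucket ("Range in us"), a cumulative percentage of I/Os completed at or below that bucket, and, in parentheses, what appears to be the count of I/Os falling in that bucket alone (the percentages accumulate, the counts do not). A minimal sketch for pulling the headers and the terminal 100.0000% buckets back out of a saved copy of this console output — the file name nvme_perf.log is only an assumption for illustration:

    # Sketch: list each histogram header together with its final (100.0000%) bucket,
    # assuming this console output was captured to nvme_perf.log.
    grep -E 'Latency histogram for PCIE|100\.0000%' nvme_perf.log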
00:09:26.281 ************************************ 00:09:26.281 END TEST nvme_perf 00:09:26.281 ************************************ 00:09:26.281 09:35:03 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:26.281 00:09:26.281 real 0m2.613s 00:09:26.281 user 0m2.203s 00:09:26.281 sys 0m0.296s 00:09:26.281 09:35:03 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.281 09:35:03 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:09:26.281 09:35:03 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:26.281 09:35:03 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:26.281 09:35:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.281 09:35:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:26.281 ************************************ 00:09:26.281 START TEST nvme_hello_world 00:09:26.281 ************************************ 00:09:26.281 09:35:03 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:26.540 Initializing NVMe Controllers 00:09:26.540 Attached to 0000:00:10.0 00:09:26.540 Namespace ID: 1 size: 6GB 00:09:26.540 Attached to 0000:00:11.0 00:09:26.540 Namespace ID: 1 size: 5GB 00:09:26.540 Attached to 0000:00:13.0 00:09:26.540 Namespace ID: 1 size: 1GB 00:09:26.540 Attached to 0000:00:12.0 00:09:26.540 Namespace ID: 1 size: 4GB 00:09:26.540 Namespace ID: 2 size: 4GB 00:09:26.540 Namespace ID: 3 size: 4GB 00:09:26.540 Initialization complete. 00:09:26.540 INFO: using host memory buffer for IO 00:09:26.540 Hello world! 00:09:26.540 INFO: using host memory buffer for IO 00:09:26.540 Hello world! 00:09:26.540 INFO: using host memory buffer for IO 00:09:26.540 Hello world! 00:09:26.540 INFO: using host memory buffer for IO 00:09:26.540 Hello world! 00:09:26.540 INFO: using host memory buffer for IO 00:09:26.540 Hello world! 00:09:26.540 INFO: using host memory buffer for IO 00:09:26.540 Hello world! 
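Editor's note: the hello_world pass above attaches to all four controllers, reports one "Namespace ID ... size" line per namespace, and prints "Hello world!" once per namespace after writing a buffer and reading it back (the INFO lines note that a host memory buffer is used for the transfer) — six lines for the six namespaces listed. A hedged sketch for re-running the same example outside the test harness, with the binary path and -i argument taken from the log; rebinding the devices with setup.sh first is assumed to be a prerequisite:

    # Sketch: standalone invocation of the example used by TEST nvme_hello_world.
    sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh     # bind NVMe devices to a userspace driver (assumed prerequisite)
    sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0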
00:09:26.540 ************************************ 00:09:26.540 END TEST nvme_hello_world 00:09:26.540 ************************************ 00:09:26.540 00:09:26.540 real 0m0.280s 00:09:26.540 user 0m0.090s 00:09:26.540 sys 0m0.145s 00:09:26.540 09:35:04 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.540 09:35:04 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:26.540 09:35:04 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:26.540 09:35:04 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:26.540 09:35:04 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.540 09:35:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:26.540 ************************************ 00:09:26.540 START TEST nvme_sgl 00:09:26.540 ************************************ 00:09:26.540 09:35:04 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:26.798 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:09:26.798 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:09:26.798 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:09:26.798 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:09:26.798 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:09:26.798 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:09:26.798 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:09:26.798 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:09:26.798 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:09:26.798 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:09:26.798 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:09:26.798 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:09:26.798 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:09:26.798 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:09:26.798 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:09:26.798 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:09:26.798 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:09:26.798 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:09:26.798 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:09:26.798 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:09:26.798 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:09:26.798 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:09:26.798 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:09:26.798 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:09:26.798 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:09:26.798 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:09:26.798 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:09:26.798 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:09:26.798 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:09:26.798 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:09:26.798 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:09:26.798 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:09:26.798 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:09:26.798 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:09:26.798 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:09:26.798 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:09:26.798 NVMe Readv/Writev Request test 00:09:26.798 Attached to 0000:00:10.0 00:09:26.798 Attached to 0000:00:11.0 00:09:26.798 Attached to 0000:00:13.0 00:09:26.798 Attached to 0000:00:12.0 00:09:26.798 0000:00:10.0: build_io_request_2 test passed 00:09:26.798 0000:00:10.0: build_io_request_4 test passed 00:09:26.798 0000:00:10.0: build_io_request_5 test passed 00:09:26.798 0000:00:10.0: build_io_request_6 test passed 00:09:26.798 0000:00:10.0: build_io_request_7 test passed 00:09:26.798 0000:00:10.0: build_io_request_10 test passed 00:09:26.798 0000:00:11.0: build_io_request_2 test passed 00:09:26.798 0000:00:11.0: build_io_request_4 test passed 00:09:26.798 0000:00:11.0: build_io_request_5 test passed 00:09:26.798 0000:00:11.0: build_io_request_6 test passed 00:09:26.798 0000:00:11.0: build_io_request_7 test passed 00:09:26.798 0000:00:11.0: build_io_request_10 test passed 00:09:26.798 Cleaning up... 00:09:26.798 ************************************ 00:09:26.798 END TEST nvme_sgl 00:09:26.798 ************************************ 00:09:26.798 00:09:26.798 real 0m0.317s 00:09:26.798 user 0m0.142s 00:09:26.798 sys 0m0.130s 00:09:26.798 09:35:04 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.798 09:35:04 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:09:26.798 09:35:04 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:26.798 09:35:04 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:26.798 09:35:04 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.798 09:35:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:26.798 ************************************ 00:09:26.798 START TEST nvme_e2edp 00:09:26.798 ************************************ 00:09:27.056 09:35:04 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:27.056 NVMe Write/Read with End-to-End data protection test 00:09:27.056 Attached to 0000:00:10.0 00:09:27.056 Attached to 0000:00:11.0 00:09:27.056 Attached to 0000:00:13.0 00:09:27.056 Attached to 0000:00:12.0 00:09:27.056 Cleaning up... 
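Editor's note: the nvme_sgl output above mixes rejected-length cases ("Invalid IO length parameter") with completed ones ("test passed") per controller, and nvme_e2edp then attaches to the same four controllers for the end-to-end data protection pass. Both tests are standalone binaries, so they can be repeated directly with the paths shown in the log — a sketch, assuming the devices are still bound for SPDK:

    # Sketch: re-run the SGL and end-to-end data protection tests outside the harness.
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp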
00:09:27.314 ************************************ 00:09:27.314 END TEST nvme_e2edp 00:09:27.314 ************************************ 00:09:27.314 00:09:27.314 real 0m0.263s 00:09:27.314 user 0m0.084s 00:09:27.314 sys 0m0.131s 00:09:27.314 09:35:04 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.314 09:35:04 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:09:27.314 09:35:04 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:27.314 09:35:04 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:27.314 09:35:04 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.314 09:35:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:27.314 ************************************ 00:09:27.314 START TEST nvme_reserve 00:09:27.314 ************************************ 00:09:27.314 09:35:04 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:27.572 ===================================================== 00:09:27.572 NVMe Controller at PCI bus 0, device 16, function 0 00:09:27.572 ===================================================== 00:09:27.572 Reservations: Not Supported 00:09:27.572 ===================================================== 00:09:27.572 NVMe Controller at PCI bus 0, device 17, function 0 00:09:27.572 ===================================================== 00:09:27.572 Reservations: Not Supported 00:09:27.572 ===================================================== 00:09:27.572 NVMe Controller at PCI bus 0, device 19, function 0 00:09:27.572 ===================================================== 00:09:27.572 Reservations: Not Supported 00:09:27.572 ===================================================== 00:09:27.572 NVMe Controller at PCI bus 0, device 18, function 0 00:09:27.572 ===================================================== 00:09:27.572 Reservations: Not Supported 00:09:27.572 Reservation test passed 00:09:27.572 ************************************ 00:09:27.572 END TEST nvme_reserve 00:09:27.572 ************************************ 00:09:27.572 00:09:27.572 real 0m0.266s 00:09:27.572 user 0m0.085s 00:09:27.572 sys 0m0.135s 00:09:27.572 09:35:05 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.572 09:35:05 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:09:27.572 09:35:05 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:27.572 09:35:05 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:27.572 09:35:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.572 09:35:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:27.572 ************************************ 00:09:27.572 START TEST nvme_err_injection 00:09:27.572 ************************************ 00:09:27.572 09:35:05 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:27.828 NVMe Error Injection test 00:09:27.828 Attached to 0000:00:10.0 00:09:27.828 Attached to 0000:00:11.0 00:09:27.828 Attached to 0000:00:13.0 00:09:27.828 Attached to 0000:00:12.0 00:09:27.828 0000:00:10.0: get features failed as expected 00:09:27.828 0000:00:11.0: get features failed as expected 00:09:27.828 0000:00:13.0: get features failed as expected 00:09:27.828 0000:00:12.0: get features failed as expected 00:09:27.828 
0000:00:10.0: get features successfully as expected 00:09:27.828 0000:00:11.0: get features successfully as expected 00:09:27.828 0000:00:13.0: get features successfully as expected 00:09:27.828 0000:00:12.0: get features successfully as expected 00:09:27.828 0000:00:10.0: read failed as expected 00:09:27.828 0000:00:11.0: read failed as expected 00:09:27.829 0000:00:13.0: read failed as expected 00:09:27.829 0000:00:12.0: read failed as expected 00:09:27.829 0000:00:10.0: read successfully as expected 00:09:27.829 0000:00:11.0: read successfully as expected 00:09:27.829 0000:00:13.0: read successfully as expected 00:09:27.829 0000:00:12.0: read successfully as expected 00:09:27.829 Cleaning up... 00:09:27.829 ************************************ 00:09:27.829 END TEST nvme_err_injection 00:09:27.829 ************************************ 00:09:27.829 00:09:27.829 real 0m0.298s 00:09:27.829 user 0m0.095s 00:09:27.829 sys 0m0.147s 00:09:27.829 09:35:05 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.829 09:35:05 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:09:27.829 09:35:05 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:27.829 09:35:05 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:09:27.829 09:35:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.829 09:35:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:27.829 ************************************ 00:09:27.829 START TEST nvme_overhead 00:09:27.829 ************************************ 00:09:27.829 09:35:05 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:29.208 Initializing NVMe Controllers 00:09:29.208 Attached to 0000:00:10.0 00:09:29.208 Attached to 0000:00:11.0 00:09:29.208 Attached to 0000:00:13.0 00:09:29.208 Attached to 0000:00:12.0 00:09:29.208 Initialization complete. Launching workers. 
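Editor's note: the overhead run that follows was started as "overhead -o 4096 -t 1 -H -i 0" (see the invocation above); read in context that looks like 4096-byte I/Os over a 1-second window with histogram output enabled, but only the values are quoted from the log — the flag meanings are inferred, not taken from the tool's help text. Note also that the avg/min/max summary printed next is in nanoseconds while the histogram buckets underneath it are in microseconds. A sketch of the same invocation with a longer sampling window:

    # Sketch: same overhead invocation with a longer window; everything except the
    # -t value is copied verbatim from the log, and -t is assumed to be seconds.
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 10 -H -i 0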
00:09:29.208 submit (in ns) avg, min, max = 13770.9, 11518.1, 120585.5 00:09:29.208 complete (in ns) avg, min, max = 8381.0, 7716.5, 104516.5 00:09:29.208 00:09:29.208 Submit histogram 00:09:29.208 ================ 00:09:29.208 Range in us Cumulative Count 00:09:29.208 11.515 - 11.566: 0.0138% ( 1) 00:09:29.208 11.669 - 11.720: 0.0277% ( 1) 00:09:29.208 12.080 - 12.132: 0.0415% ( 1) 00:09:29.208 12.492 - 12.543: 0.0553% ( 1) 00:09:29.208 12.543 - 12.594: 0.0968% ( 3) 00:09:29.208 12.594 - 12.646: 0.1797% ( 6) 00:09:29.208 12.646 - 12.697: 0.5807% ( 29) 00:09:29.208 12.697 - 12.749: 1.2166% ( 46) 00:09:29.208 12.749 - 12.800: 2.8342% ( 117) 00:09:29.208 12.800 - 12.851: 5.1846% ( 170) 00:09:29.208 12.851 - 12.903: 9.0419% ( 279) 00:09:29.208 12.903 - 12.954: 14.3785% ( 386) 00:09:29.208 12.954 - 13.006: 19.7981% ( 392) 00:09:29.208 13.006 - 13.057: 26.4759% ( 483) 00:09:29.208 13.057 - 13.108: 34.1076% ( 552) 00:09:29.208 13.108 - 13.160: 40.8682% ( 489) 00:09:29.208 13.160 - 13.263: 54.5279% ( 988) 00:09:29.208 13.263 - 13.365: 65.2426% ( 775) 00:09:29.208 13.365 - 13.468: 74.2569% ( 652) 00:09:29.208 13.468 - 13.571: 79.9115% ( 409) 00:09:29.208 13.571 - 13.674: 84.0039% ( 296) 00:09:29.208 13.674 - 13.777: 86.7552% ( 199) 00:09:29.208 13.777 - 13.880: 89.1055% ( 170) 00:09:29.208 13.880 - 13.982: 90.4742% ( 99) 00:09:29.208 13.982 - 14.085: 91.4697% ( 72) 00:09:29.208 14.085 - 14.188: 92.0227% ( 40) 00:09:29.208 14.188 - 14.291: 92.4236% ( 29) 00:09:29.208 14.291 - 14.394: 92.6310% ( 15) 00:09:29.208 14.394 - 14.496: 92.7554% ( 9) 00:09:29.208 14.496 - 14.599: 92.8384% ( 6) 00:09:29.208 14.599 - 14.702: 92.8660% ( 2) 00:09:29.208 14.702 - 14.805: 92.9352% ( 5) 00:09:29.208 14.805 - 14.908: 92.9490% ( 1) 00:09:29.208 14.908 - 15.010: 93.0043% ( 4) 00:09:29.208 15.113 - 15.216: 93.0181% ( 1) 00:09:29.208 15.319 - 15.422: 93.0458% ( 2) 00:09:29.208 15.422 - 15.524: 93.0734% ( 2) 00:09:29.208 15.627 - 15.730: 93.1425% ( 5) 00:09:29.208 15.730 - 15.833: 93.1840% ( 3) 00:09:29.208 15.833 - 15.936: 93.1978% ( 1) 00:09:29.208 15.936 - 16.039: 93.2255% ( 2) 00:09:29.208 16.039 - 16.141: 93.2946% ( 5) 00:09:29.208 16.141 - 16.244: 93.3223% ( 2) 00:09:29.208 16.244 - 16.347: 93.3499% ( 2) 00:09:29.208 16.347 - 16.450: 93.3637% ( 1) 00:09:29.208 16.450 - 16.553: 93.4052% ( 3) 00:09:29.208 16.553 - 16.655: 93.4744% ( 5) 00:09:29.208 16.655 - 16.758: 93.5850% ( 8) 00:09:29.208 16.758 - 16.861: 93.7232% ( 10) 00:09:29.208 16.861 - 16.964: 93.8615% ( 10) 00:09:29.208 16.964 - 17.067: 94.1103% ( 18) 00:09:29.208 17.067 - 17.169: 94.3592% ( 18) 00:09:29.208 17.169 - 17.272: 94.5527% ( 14) 00:09:29.208 17.272 - 17.375: 94.7463% ( 14) 00:09:29.208 17.375 - 17.478: 94.9537% ( 15) 00:09:29.208 17.478 - 17.581: 95.0505% ( 7) 00:09:29.208 17.581 - 17.684: 95.2440% ( 14) 00:09:29.209 17.684 - 17.786: 95.3823% ( 10) 00:09:29.209 17.786 - 17.889: 95.4929% ( 8) 00:09:29.209 17.889 - 17.992: 95.6311% ( 10) 00:09:29.209 17.992 - 18.095: 95.7832% ( 11) 00:09:29.209 18.095 - 18.198: 95.8800% ( 7) 00:09:29.209 18.198 - 18.300: 96.0736% ( 14) 00:09:29.209 18.300 - 18.403: 96.1980% ( 9) 00:09:29.209 18.403 - 18.506: 96.3501% ( 11) 00:09:29.209 18.506 - 18.609: 96.4883% ( 10) 00:09:29.209 18.609 - 18.712: 96.6957% ( 15) 00:09:29.209 18.712 - 18.814: 96.8616% ( 12) 00:09:29.209 18.814 - 18.917: 96.9722% ( 8) 00:09:29.209 18.917 - 19.020: 97.0690% ( 7) 00:09:29.209 19.020 - 19.123: 97.1381% ( 5) 00:09:29.209 19.123 - 19.226: 97.1934% ( 4) 00:09:29.209 19.226 - 19.329: 97.3178% ( 9) 00:09:29.209 19.329 - 19.431: 97.4008% ( 6) 
00:09:29.209 19.431 - 19.534: 97.5252% ( 9) 00:09:29.209 19.534 - 19.637: 97.6082% ( 6) 00:09:29.209 19.637 - 19.740: 97.6911% ( 6) 00:09:29.209 19.740 - 19.843: 97.7741% ( 6) 00:09:29.209 19.843 - 19.945: 97.8570% ( 6) 00:09:29.209 19.945 - 20.048: 97.9815% ( 9) 00:09:29.209 20.048 - 20.151: 98.0783% ( 7) 00:09:29.209 20.151 - 20.254: 98.1197% ( 3) 00:09:29.209 20.254 - 20.357: 98.2165% ( 7) 00:09:29.209 20.357 - 20.459: 98.2580% ( 3) 00:09:29.209 20.459 - 20.562: 98.3133% ( 4) 00:09:29.209 20.562 - 20.665: 98.3962% ( 6) 00:09:29.209 20.665 - 20.768: 98.4101% ( 1) 00:09:29.209 20.768 - 20.871: 98.4377% ( 2) 00:09:29.209 20.871 - 20.973: 98.4654% ( 2) 00:09:29.209 20.973 - 21.076: 98.4792% ( 1) 00:09:29.209 21.076 - 21.179: 98.5068% ( 2) 00:09:29.209 21.179 - 21.282: 98.5345% ( 2) 00:09:29.209 21.282 - 21.385: 98.5760% ( 3) 00:09:29.209 21.385 - 21.488: 98.6036% ( 2) 00:09:29.209 21.488 - 21.590: 98.6174% ( 1) 00:09:29.209 21.693 - 21.796: 98.6313% ( 1) 00:09:29.209 21.796 - 21.899: 98.6589% ( 2) 00:09:29.209 21.899 - 22.002: 98.6727% ( 1) 00:09:29.209 22.002 - 22.104: 98.7142% ( 3) 00:09:29.209 22.104 - 22.207: 98.7281% ( 1) 00:09:29.209 22.207 - 22.310: 98.7557% ( 2) 00:09:29.209 22.618 - 22.721: 98.7834% ( 2) 00:09:29.209 22.721 - 22.824: 98.7972% ( 1) 00:09:29.209 22.927 - 23.030: 98.8248% ( 2) 00:09:29.209 23.030 - 23.133: 98.8387% ( 1) 00:09:29.209 23.133 - 23.235: 98.8801% ( 3) 00:09:29.209 23.235 - 23.338: 98.9078% ( 2) 00:09:29.209 23.441 - 23.544: 98.9354% ( 2) 00:09:29.209 23.544 - 23.647: 98.9493% ( 1) 00:09:29.209 23.749 - 23.852: 98.9631% ( 1) 00:09:29.209 24.058 - 24.161: 98.9769% ( 1) 00:09:29.209 24.366 - 24.469: 98.9907% ( 1) 00:09:29.209 24.572 - 24.675: 99.0046% ( 1) 00:09:29.209 24.675 - 24.778: 99.0322% ( 2) 00:09:29.209 24.983 - 25.086: 99.0460% ( 1) 00:09:29.209 25.189 - 25.292: 99.1013% ( 4) 00:09:29.209 25.292 - 25.394: 99.1152% ( 1) 00:09:29.209 25.394 - 25.497: 99.1566% ( 3) 00:09:29.209 25.497 - 25.600: 99.1843% ( 2) 00:09:29.209 25.600 - 25.703: 99.2258% ( 3) 00:09:29.209 25.703 - 25.806: 99.2534% ( 2) 00:09:29.209 25.806 - 25.908: 99.3364% ( 6) 00:09:29.209 25.908 - 26.011: 99.3779% ( 3) 00:09:29.209 26.011 - 26.114: 99.4055% ( 2) 00:09:29.209 26.114 - 26.217: 99.4608% ( 4) 00:09:29.209 26.217 - 26.320: 99.4885% ( 2) 00:09:29.209 26.320 - 26.525: 99.5161% ( 2) 00:09:29.209 26.525 - 26.731: 99.5438% ( 2) 00:09:29.209 26.731 - 26.937: 99.5714% ( 2) 00:09:29.209 26.937 - 27.142: 99.5852% ( 1) 00:09:29.209 27.759 - 27.965: 99.5991% ( 1) 00:09:29.209 28.170 - 28.376: 99.6129% ( 1) 00:09:29.209 28.376 - 28.582: 99.6267% ( 1) 00:09:29.209 28.787 - 28.993: 99.6405% ( 1) 00:09:29.209 28.993 - 29.198: 99.6544% ( 1) 00:09:29.209 29.198 - 29.404: 99.6682% ( 1) 00:09:29.209 29.404 - 29.610: 99.7235% ( 4) 00:09:29.209 29.610 - 29.815: 99.7373% ( 1) 00:09:29.209 29.815 - 30.021: 99.7788% ( 3) 00:09:29.209 30.021 - 30.227: 99.8064% ( 2) 00:09:29.209 30.227 - 30.432: 99.8203% ( 1) 00:09:29.209 31.049 - 31.255: 99.8479% ( 2) 00:09:29.209 31.666 - 31.871: 99.8617% ( 1) 00:09:29.209 32.900 - 33.105: 99.8756% ( 1) 00:09:29.209 33.722 - 33.928: 99.8894% ( 1) 00:09:29.209 40.508 - 40.713: 99.9032% ( 1) 00:09:29.209 45.854 - 46.059: 99.9170% ( 1) 00:09:29.209 48.733 - 48.938: 99.9309% ( 1) 00:09:29.209 87.184 - 87.595: 99.9447% ( 1) 00:09:29.209 106.101 - 106.924: 99.9585% ( 1) 00:09:29.209 108.569 - 109.391: 99.9723% ( 1) 00:09:29.209 120.084 - 120.906: 100.0000% ( 2) 00:09:29.209 00:09:29.209 Complete histogram 00:09:29.210 ================== 00:09:29.210 Range in us Cumulative 
Count 00:09:29.210 7.711 - 7.762: 0.0415% ( 3) 00:09:29.210 7.762 - 7.814: 1.7282% ( 122) 00:09:29.210 7.814 - 7.865: 10.1341% ( 608) 00:09:29.210 7.865 - 7.916: 26.0058% ( 1148) 00:09:29.210 7.916 - 7.968: 41.9605% ( 1154) 00:09:29.210 7.968 - 8.019: 55.6062% ( 987) 00:09:29.210 8.019 - 8.071: 66.0583% ( 756) 00:09:29.210 8.071 - 8.122: 73.0126% ( 503) 00:09:29.210 8.122 - 8.173: 77.7824% ( 345) 00:09:29.210 8.173 - 8.225: 80.8102% ( 219) 00:09:29.210 8.225 - 8.276: 83.0776% ( 164) 00:09:29.210 8.276 - 8.328: 85.5385% ( 178) 00:09:29.210 8.328 - 8.379: 87.2943% ( 127) 00:09:29.210 8.379 - 8.431: 88.7875% ( 108) 00:09:29.210 8.431 - 8.482: 90.0041% ( 88) 00:09:29.210 8.482 - 8.533: 91.4697% ( 106) 00:09:29.210 8.533 - 8.585: 92.6586% ( 86) 00:09:29.210 8.585 - 8.636: 93.5158% ( 62) 00:09:29.210 8.636 - 8.688: 94.2901% ( 56) 00:09:29.210 8.688 - 8.739: 94.8846% ( 43) 00:09:29.210 8.739 - 8.790: 95.3270% ( 32) 00:09:29.210 8.790 - 8.842: 95.6035% ( 20) 00:09:29.210 8.842 - 8.893: 95.9076% ( 22) 00:09:29.210 8.893 - 8.945: 96.0736% ( 12) 00:09:29.210 8.945 - 8.996: 96.1980% ( 9) 00:09:29.210 8.996 - 9.047: 96.2948% ( 7) 00:09:29.210 9.047 - 9.099: 96.3915% ( 7) 00:09:29.210 9.099 - 9.150: 96.4883% ( 7) 00:09:29.210 9.150 - 9.202: 96.5298% ( 3) 00:09:29.210 9.202 - 9.253: 96.5713% ( 3) 00:09:29.210 9.253 - 9.304: 96.5989% ( 2) 00:09:29.210 9.304 - 9.356: 96.6266% ( 2) 00:09:29.210 9.356 - 9.407: 96.6404% ( 1) 00:09:29.210 9.510 - 9.561: 96.6680% ( 2) 00:09:29.210 9.613 - 9.664: 96.6819% ( 1) 00:09:29.210 9.716 - 9.767: 96.6957% ( 1) 00:09:29.210 9.767 - 9.818: 96.7095% ( 1) 00:09:29.210 9.818 - 9.870: 96.7234% ( 1) 00:09:29.210 9.921 - 9.973: 96.7372% ( 1) 00:09:29.210 10.230 - 10.281: 96.7510% ( 1) 00:09:29.210 10.487 - 10.538: 96.7648% ( 1) 00:09:29.210 10.590 - 10.641: 96.7787% ( 1) 00:09:29.210 10.744 - 10.795: 96.7925% ( 1) 00:09:29.210 10.847 - 10.898: 96.8063% ( 1) 00:09:29.210 11.052 - 11.104: 96.8201% ( 1) 00:09:29.210 11.104 - 11.155: 96.8340% ( 1) 00:09:29.210 11.258 - 11.309: 96.8478% ( 1) 00:09:29.210 11.309 - 11.361: 96.8754% ( 2) 00:09:29.210 11.463 - 11.515: 96.8893% ( 1) 00:09:29.210 11.515 - 11.566: 96.9031% ( 1) 00:09:29.210 11.566 - 11.618: 96.9169% ( 1) 00:09:29.210 11.618 - 11.669: 96.9307% ( 1) 00:09:29.210 11.669 - 11.720: 96.9584% ( 2) 00:09:29.210 11.720 - 11.772: 96.9860% ( 2) 00:09:29.210 11.772 - 11.823: 96.9999% ( 1) 00:09:29.210 11.823 - 11.875: 97.0137% ( 1) 00:09:29.210 11.875 - 11.926: 97.0275% ( 1) 00:09:29.210 11.926 - 11.978: 97.0413% ( 1) 00:09:29.210 12.286 - 12.337: 97.0690% ( 2) 00:09:29.210 12.337 - 12.389: 97.0828% ( 1) 00:09:29.210 12.697 - 12.749: 97.0966% ( 1) 00:09:29.210 12.749 - 12.800: 97.1105% ( 1) 00:09:29.210 12.800 - 12.851: 97.1381% ( 2) 00:09:29.210 12.903 - 12.954: 97.1519% ( 1) 00:09:29.210 12.954 - 13.006: 97.1658% ( 1) 00:09:29.210 13.057 - 13.108: 97.2072% ( 3) 00:09:29.210 13.108 - 13.160: 97.2487% ( 3) 00:09:29.210 13.160 - 13.263: 97.3455% ( 7) 00:09:29.210 13.263 - 13.365: 97.4976% ( 11) 00:09:29.210 13.365 - 13.468: 97.6220% ( 9) 00:09:29.210 13.468 - 13.571: 97.7326% ( 8) 00:09:29.210 13.571 - 13.674: 97.8432% ( 8) 00:09:29.210 13.674 - 13.777: 97.9676% ( 9) 00:09:29.210 13.777 - 13.880: 98.0230% ( 4) 00:09:29.210 13.880 - 13.982: 98.1612% ( 10) 00:09:29.210 13.982 - 14.085: 98.2165% ( 4) 00:09:29.210 14.085 - 14.188: 98.2718% ( 4) 00:09:29.210 14.188 - 14.291: 98.3271% ( 4) 00:09:29.210 14.291 - 14.394: 98.4239% ( 7) 00:09:29.210 14.394 - 14.496: 98.4515% ( 2) 00:09:29.210 14.496 - 14.599: 98.4930% ( 3) 00:09:29.210 14.599 - 
14.702: 98.5621% ( 5) 00:09:29.210 14.702 - 14.805: 98.5898% ( 2) 00:09:29.210 14.908 - 15.010: 98.6589% ( 5) 00:09:29.210 15.113 - 15.216: 98.6866% ( 2) 00:09:29.210 15.627 - 15.730: 98.7281% ( 3) 00:09:29.210 15.730 - 15.833: 98.7419% ( 1) 00:09:29.210 15.833 - 15.936: 98.7695% ( 2) 00:09:29.210 16.244 - 16.347: 98.7834% ( 1) 00:09:29.210 16.553 - 16.655: 98.7972% ( 1) 00:09:29.210 17.169 - 17.272: 98.8110% ( 1) 00:09:29.210 17.272 - 17.375: 98.8248% ( 1) 00:09:29.210 17.375 - 17.478: 98.8387% ( 1) 00:09:29.210 17.786 - 17.889: 98.8525% ( 1) 00:09:29.210 17.889 - 17.992: 98.8663% ( 1) 00:09:29.210 18.506 - 18.609: 98.8801% ( 1) 00:09:29.210 19.020 - 19.123: 98.9078% ( 2) 00:09:29.210 19.226 - 19.329: 98.9216% ( 1) 00:09:29.210 19.637 - 19.740: 98.9493% ( 2) 00:09:29.210 19.843 - 19.945: 98.9631% ( 1) 00:09:29.210 20.048 - 20.151: 99.0046% ( 3) 00:09:29.210 20.151 - 20.254: 99.0737% ( 5) 00:09:29.210 20.254 - 20.357: 99.1428% ( 5) 00:09:29.210 20.357 - 20.459: 99.1705% ( 2) 00:09:29.211 20.459 - 20.562: 99.2396% ( 5) 00:09:29.211 20.562 - 20.665: 99.2811% ( 3) 00:09:29.211 20.665 - 20.768: 99.3087% ( 2) 00:09:29.211 20.768 - 20.871: 99.3502% ( 3) 00:09:29.211 21.076 - 21.179: 99.3640% ( 1) 00:09:29.211 21.385 - 21.488: 99.3779% ( 1) 00:09:29.211 22.104 - 22.207: 99.3917% ( 1) 00:09:29.211 23.133 - 23.235: 99.4055% ( 1) 00:09:29.211 23.955 - 24.058: 99.4193% ( 1) 00:09:29.211 24.058 - 24.161: 99.4470% ( 2) 00:09:29.211 24.161 - 24.263: 99.5023% ( 4) 00:09:29.211 24.263 - 24.366: 99.5161% ( 1) 00:09:29.211 24.366 - 24.469: 99.5576% ( 3) 00:09:29.211 24.469 - 24.572: 99.5852% ( 2) 00:09:29.211 24.572 - 24.675: 99.6682% ( 6) 00:09:29.211 24.675 - 24.778: 99.6820% ( 1) 00:09:29.211 24.778 - 24.880: 99.7097% ( 2) 00:09:29.211 24.880 - 24.983: 99.7373% ( 2) 00:09:29.211 24.983 - 25.086: 99.7650% ( 2) 00:09:29.211 25.086 - 25.189: 99.7788% ( 1) 00:09:29.211 25.189 - 25.292: 99.7926% ( 1) 00:09:29.211 25.394 - 25.497: 99.8064% ( 1) 00:09:29.211 25.497 - 25.600: 99.8203% ( 1) 00:09:29.211 27.553 - 27.759: 99.8479% ( 2) 00:09:29.211 29.815 - 30.021: 99.8617% ( 1) 00:09:29.211 30.227 - 30.432: 99.8756% ( 1) 00:09:29.211 30.432 - 30.638: 99.8894% ( 1) 00:09:29.211 30.843 - 31.049: 99.9032% ( 1) 00:09:29.211 31.871 - 32.077: 99.9170% ( 1) 00:09:29.211 32.694 - 32.900: 99.9309% ( 1) 00:09:29.211 35.161 - 35.367: 99.9447% ( 1) 00:09:29.211 39.068 - 39.274: 99.9585% ( 1) 00:09:29.211 75.258 - 75.669: 99.9723% ( 1) 00:09:29.211 100.755 - 101.166: 99.9862% ( 1) 00:09:29.211 104.456 - 104.867: 100.0000% ( 1) 00:09:29.211 00:09:29.211 ************************************ 00:09:29.211 END TEST nvme_overhead 00:09:29.211 ************************************ 00:09:29.211 00:09:29.211 real 0m1.275s 00:09:29.211 user 0m1.089s 00:09:29.211 sys 0m0.130s 00:09:29.211 09:35:06 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.211 09:35:06 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:09:29.211 09:35:06 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:29.211 09:35:06 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:29.211 09:35:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.211 09:35:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:29.211 ************************************ 00:09:29.211 START TEST nvme_arbitration 00:09:29.211 ************************************ 00:09:29.211 09:35:06 nvme.nvme_arbitration -- common/autotest_common.sh@1125 
-- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:32.544 Initializing NVMe Controllers 00:09:32.544 Attached to 0000:00:10.0 00:09:32.544 Attached to 0000:00:11.0 00:09:32.544 Attached to 0000:00:13.0 00:09:32.544 Attached to 0000:00:12.0 00:09:32.544 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:32.544 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:32.544 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:32.544 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:32.544 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:32.544 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:32.544 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:32.544 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:32.544 Initialization complete. Launching workers. 00:09:32.544 Starting thread on core 1 with urgent priority queue 00:09:32.544 Starting thread on core 2 with urgent priority queue 00:09:32.544 Starting thread on core 3 with urgent priority queue 00:09:32.544 Starting thread on core 0 with urgent priority queue 00:09:32.544 QEMU NVMe Ctrl (12340 ) core 0: 4458.67 IO/s 22.43 secs/100000 ios 00:09:32.544 QEMU NVMe Ctrl (12342 ) core 0: 4458.67 IO/s 22.43 secs/100000 ios 00:09:32.544 QEMU NVMe Ctrl (12341 ) core 1: 4202.67 IO/s 23.79 secs/100000 ios 00:09:32.544 QEMU NVMe Ctrl (12342 ) core 1: 4202.67 IO/s 23.79 secs/100000 ios 00:09:32.544 QEMU NVMe Ctrl (12343 ) core 2: 4650.67 IO/s 21.50 secs/100000 ios 00:09:32.544 QEMU NVMe Ctrl (12342 ) core 3: 4480.00 IO/s 22.32 secs/100000 ios 00:09:32.544 ======================================================== 00:09:32.544 00:09:32.544 ************************************ 00:09:32.544 END TEST nvme_arbitration 00:09:32.544 ************************************ 00:09:32.544 00:09:32.544 real 0m3.308s 00:09:32.544 user 0m9.057s 00:09:32.544 sys 0m0.168s 00:09:32.544 09:35:10 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.544 09:35:10 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:32.544 09:35:10 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:32.544 09:35:10 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:32.544 09:35:10 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.544 09:35:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:32.802 ************************************ 00:09:32.802 START TEST nvme_single_aen 00:09:32.802 ************************************ 00:09:32.802 09:35:10 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:32.802 Asynchronous Event Request test 00:09:32.802 Attached to 0000:00:10.0 00:09:32.802 Attached to 0000:00:11.0 00:09:32.802 Attached to 0000:00:13.0 00:09:32.802 Attached to 0000:00:12.0 00:09:32.802 Reset controller to setup AER completions for this process 00:09:32.802 Registering asynchronous event callbacks... 
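Editor's note: the arbitration summary earlier in this pass gives each (controller, core) worker both an "IO/s" figure and a "secs/100000 ios" figure; the second column is simply the fixed 100000-I/O budget from the -n option divided by the first (100000 / 4458.67 ≈ 22.43 s), and adding the six worker lines gives the aggregate rate for the run, roughly 26,453 IO/s. A quick check with bc:

    # Sketch: reproduce the "secs/100000 ios" column and the aggregate IO/s from the
    # per-worker figures quoted in the arbitration summary above.
    echo "scale=4; 100000/4458.67" | bc                              # ~22.43 s
    echo "4458.67+4458.67+4202.67+4202.67+4650.67+4480.00" | bc      # ~26453 IO/s total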
00:09:32.802 Getting orig temperature thresholds of all controllers 00:09:32.802 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:32.802 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:32.802 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:32.802 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:32.802 Setting all controllers temperature threshold low to trigger AER 00:09:32.802 Waiting for all controllers temperature threshold to be set lower 00:09:32.802 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:32.803 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:32.803 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:32.803 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:32.803 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:32.803 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:32.803 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:32.803 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:32.803 Waiting for all controllers to trigger AER and reset threshold 00:09:32.803 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:32.803 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:32.803 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:32.803 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:32.803 Cleaning up... 00:09:33.061 ************************************ 00:09:33.061 END TEST nvme_single_aen 00:09:33.061 ************************************ 00:09:33.061 00:09:33.061 real 0m0.263s 00:09:33.061 user 0m0.086s 00:09:33.061 sys 0m0.131s 00:09:33.061 09:35:10 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.061 09:35:10 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:33.061 09:35:10 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:33.061 09:35:10 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:33.061 09:35:10 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.061 09:35:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:33.061 ************************************ 00:09:33.061 START TEST nvme_doorbell_aers 00:09:33.061 ************************************ 00:09:33.061 09:35:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:09:33.061 09:35:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:33.061 09:35:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:33.061 09:35:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:33.061 09:35:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:33.061 09:35:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:09:33.061 09:35:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:09:33.061 09:35:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:33.061 09:35:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:33.061 09:35:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 
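Editor's note: nvme_doorbell_aers above builds its device list by piping gen_nvme.sh through jq and then, as the per-device invocations below show, runs the doorbell_aers binary once per PCIe address under a 10-second timeout. Condensed into a single sketch, reconstructed from the calls visible in the log (the bdfs array name is the one the function itself uses):

    # Sketch of the loop nvme_doorbell_aers performs, assembled from the log lines above.
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r "trtype:PCIe traddr:$bdf"
    done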
00:09:33.061 09:35:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:09:33.061 09:35:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:33.062 09:35:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:33.062 09:35:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:33.320 [2024-07-24 09:35:11.074287] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80054) is not found. Dropping the request. 00:09:43.293 Executing: test_write_invalid_db 00:09:43.293 Waiting for AER completion... 00:09:43.294 Failure: test_write_invalid_db 00:09:43.294 00:09:43.294 Executing: test_invalid_db_write_overflow_sq 00:09:43.294 Waiting for AER completion... 00:09:43.294 Failure: test_invalid_db_write_overflow_sq 00:09:43.294 00:09:43.294 Executing: test_invalid_db_write_overflow_cq 00:09:43.294 Waiting for AER completion... 00:09:43.294 Failure: test_invalid_db_write_overflow_cq 00:09:43.294 00:09:43.294 09:35:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:43.294 09:35:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:43.552 [2024-07-24 09:35:21.148909] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80054) is not found. Dropping the request. 00:09:53.525 Executing: test_write_invalid_db 00:09:53.525 Waiting for AER completion... 00:09:53.525 Failure: test_write_invalid_db 00:09:53.525 00:09:53.525 Executing: test_invalid_db_write_overflow_sq 00:09:53.525 Waiting for AER completion... 00:09:53.525 Failure: test_invalid_db_write_overflow_sq 00:09:53.525 00:09:53.525 Executing: test_invalid_db_write_overflow_cq 00:09:53.525 Waiting for AER completion... 00:09:53.525 Failure: test_invalid_db_write_overflow_cq 00:09:53.525 00:09:53.525 09:35:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:53.525 09:35:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:53.525 [2024-07-24 09:35:31.185820] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80054) is not found. Dropping the request. 00:10:03.557 Executing: test_write_invalid_db 00:10:03.557 Waiting for AER completion... 00:10:03.557 Failure: test_write_invalid_db 00:10:03.557 00:10:03.557 Executing: test_invalid_db_write_overflow_sq 00:10:03.557 Waiting for AER completion... 00:10:03.557 Failure: test_invalid_db_write_overflow_sq 00:10:03.557 00:10:03.557 Executing: test_invalid_db_write_overflow_cq 00:10:03.557 Waiting for AER completion... 
00:10:03.557 Failure: test_invalid_db_write_overflow_cq 00:10:03.557 00:10:03.557 09:35:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:03.557 09:35:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:03.557 [2024-07-24 09:35:41.231239] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80054) is not found. Dropping the request. 00:10:13.532 Executing: test_write_invalid_db 00:10:13.532 Waiting for AER completion... 00:10:13.532 Failure: test_write_invalid_db 00:10:13.532 00:10:13.532 Executing: test_invalid_db_write_overflow_sq 00:10:13.532 Waiting for AER completion... 00:10:13.532 Failure: test_invalid_db_write_overflow_sq 00:10:13.532 00:10:13.532 Executing: test_invalid_db_write_overflow_cq 00:10:13.532 Waiting for AER completion... 00:10:13.532 Failure: test_invalid_db_write_overflow_cq 00:10:13.532 00:10:13.532 00:10:13.532 real 0m40.307s 00:10:13.532 user 0m29.513s 00:10:13.532 sys 0m10.422s 00:10:13.532 09:35:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.532 09:35:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:13.532 ************************************ 00:10:13.532 END TEST nvme_doorbell_aers 00:10:13.532 ************************************ 00:10:13.532 09:35:51 nvme -- nvme/nvme.sh@97 -- # uname 00:10:13.532 09:35:51 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:13.532 09:35:51 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:13.532 09:35:51 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:10:13.532 09:35:51 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.532 09:35:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:13.532 ************************************ 00:10:13.532 START TEST nvme_multi_aen 00:10:13.532 ************************************ 00:10:13.532 09:35:51 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:13.532 [2024-07-24 09:35:51.323227] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80054) is not found. Dropping the request. 00:10:13.532 [2024-07-24 09:35:51.323321] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80054) is not found. Dropping the request. 00:10:13.532 [2024-07-24 09:35:51.323344] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80054) is not found. Dropping the request. 00:10:13.532 [2024-07-24 09:35:51.325002] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80054) is not found. Dropping the request. 00:10:13.532 [2024-07-24 09:35:51.325044] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80054) is not found. Dropping the request. 00:10:13.532 [2024-07-24 09:35:51.325059] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80054) is not found. Dropping the request. 00:10:13.532 [2024-07-24 09:35:51.326404] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80054) is not found. 
Dropping the request. 00:10:13.532 [2024-07-24 09:35:51.326495] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80054) is not found. Dropping the request. 00:10:13.532 [2024-07-24 09:35:51.326551] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80054) is not found. Dropping the request. 00:10:13.532 [2024-07-24 09:35:51.328123] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80054) is not found. Dropping the request. 00:10:13.532 [2024-07-24 09:35:51.328300] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80054) is not found. Dropping the request. 00:10:13.533 [2024-07-24 09:35:51.328409] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80054) is not found. Dropping the request. 00:10:13.533 Child process pid: 80575 00:10:13.791 [Child] Asynchronous Event Request test 00:10:13.791 [Child] Attached to 0000:00:10.0 00:10:13.791 [Child] Attached to 0000:00:11.0 00:10:13.791 [Child] Attached to 0000:00:13.0 00:10:13.791 [Child] Attached to 0000:00:12.0 00:10:13.791 [Child] Registering asynchronous event callbacks... 00:10:13.791 [Child] Getting orig temperature thresholds of all controllers 00:10:13.791 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:13.791 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:13.791 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:13.791 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:13.791 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:13.791 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:13.791 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:13.791 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:13.791 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:13.791 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:13.791 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:13.791 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:13.791 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:13.791 [Child] Cleaning up... 00:10:14.050 Asynchronous Event Request test 00:10:14.050 Attached to 0000:00:10.0 00:10:14.050 Attached to 0000:00:11.0 00:10:14.050 Attached to 0000:00:13.0 00:10:14.050 Attached to 0000:00:12.0 00:10:14.050 Reset controller to setup AER completions for this process 00:10:14.050 Registering asynchronous event callbacks... 
00:10:14.050 Getting orig temperature thresholds of all controllers 00:10:14.050 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:14.050 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:14.050 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:14.050 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:14.050 Setting all controllers temperature threshold low to trigger AER 00:10:14.050 Waiting for all controllers temperature threshold to be set lower 00:10:14.050 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:14.050 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:14.050 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:14.050 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:14.050 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:14.050 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:14.050 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:14.050 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:14.050 Waiting for all controllers to trigger AER and reset threshold 00:10:14.050 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:14.050 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:14.050 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:14.050 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:14.050 Cleaning up... 00:10:14.050 00:10:14.050 real 0m0.561s 00:10:14.050 user 0m0.168s 00:10:14.050 sys 0m0.285s 00:10:14.050 ************************************ 00:10:14.050 END TEST nvme_multi_aen 00:10:14.050 ************************************ 00:10:14.050 09:35:51 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.050 09:35:51 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:14.050 09:35:51 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:14.050 09:35:51 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:14.050 09:35:51 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.050 09:35:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:14.050 ************************************ 00:10:14.050 START TEST nvme_startup 00:10:14.050 ************************************ 00:10:14.050 09:35:51 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:14.309 Initializing NVMe Controllers 00:10:14.309 Attached to 0000:00:10.0 00:10:14.309 Attached to 0000:00:11.0 00:10:14.309 Attached to 0000:00:13.0 00:10:14.309 Attached to 0000:00:12.0 00:10:14.309 Initialization complete. 00:10:14.309 Time used:189573.797 (us). 
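The nvme_multi_secondary test that starts below (pids 80631 and 80632) exercises SPDK's multi-process mode: several spdk_nvme_perf instances attach to the same controllers by sharing the -i 0 shared-memory group id while pinned to disjoint core masks (0x1, 0x2, 0x4). A reduced sketch of that launch pattern, with the binary path and flags taken from the log; run as root, and note the real nvme.sh wait ordering may differ:

#!/usr/bin/env bash
# Sketch of the concurrent primary/secondary perf launch used below.
set -euo pipefail
perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf   # path from the log
"$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!     # longest run, core 0
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!     # core 1
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid2=$!     # core 2
wait "$pid1" "$pid2"   # the shorter runs finish first
wait "$pid0"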
00:10:14.309 00:10:14.309 real 0m0.277s 00:10:14.309 user 0m0.091s 00:10:14.309 sys 0m0.137s 00:10:14.309 ************************************ 00:10:14.309 END TEST nvme_startup 00:10:14.309 ************************************ 00:10:14.309 09:35:51 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.309 09:35:51 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:14.309 09:35:52 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:14.309 09:35:52 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:14.309 09:35:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.309 09:35:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:14.309 ************************************ 00:10:14.309 START TEST nvme_multi_secondary 00:10:14.309 ************************************ 00:10:14.309 09:35:52 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:10:14.309 09:35:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=80631 00:10:14.309 09:35:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:14.309 09:35:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=80632 00:10:14.309 09:35:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:14.309 09:35:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:18.496 Initializing NVMe Controllers 00:10:18.496 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:18.496 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:18.496 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:18.496 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:18.496 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:18.496 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:18.496 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:18.496 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:18.496 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:18.496 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:18.496 Initialization complete. Launching workers. 
00:10:18.496 ======================================================== 00:10:18.496 Latency(us) 00:10:18.496 Device Information : IOPS MiB/s Average min max 00:10:18.496 PCIE (0000:00:10.0) NSID 1 from core 2: 3290.58 12.85 4860.63 1293.01 11220.30 00:10:18.496 PCIE (0000:00:11.0) NSID 1 from core 2: 3290.58 12.85 4861.99 1285.52 10766.33 00:10:18.496 PCIE (0000:00:13.0) NSID 1 from core 2: 3290.58 12.85 4862.00 1293.95 10751.83 00:10:18.496 PCIE (0000:00:12.0) NSID 1 from core 2: 3290.58 12.85 4866.72 1242.20 11444.01 00:10:18.496 PCIE (0000:00:12.0) NSID 2 from core 2: 3290.58 12.85 4861.94 1155.75 10717.83 00:10:18.496 PCIE (0000:00:12.0) NSID 3 from core 2: 3290.58 12.85 4862.16 1174.61 11341.04 00:10:18.496 ======================================================== 00:10:18.496 Total : 19743.51 77.12 4862.57 1155.75 11444.01 00:10:18.496 00:10:18.496 09:35:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 80631 00:10:18.496 Initializing NVMe Controllers 00:10:18.496 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:18.496 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:18.496 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:18.496 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:18.496 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:18.496 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:18.496 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:18.496 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:18.496 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:18.496 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:18.496 Initialization complete. Launching workers. 00:10:18.496 ======================================================== 00:10:18.496 Latency(us) 00:10:18.496 Device Information : IOPS MiB/s Average min max 00:10:18.496 PCIE (0000:00:10.0) NSID 1 from core 1: 5016.99 19.60 3186.54 1088.87 6453.16 00:10:18.497 PCIE (0000:00:11.0) NSID 1 from core 1: 5016.99 19.60 3188.61 1104.89 6506.57 00:10:18.497 PCIE (0000:00:13.0) NSID 1 from core 1: 5016.99 19.60 3188.84 1105.51 6319.78 00:10:18.497 PCIE (0000:00:12.0) NSID 1 from core 1: 5016.99 19.60 3188.99 1112.72 5624.90 00:10:18.497 PCIE (0000:00:12.0) NSID 2 from core 1: 5016.99 19.60 3189.05 1127.00 5495.51 00:10:18.497 PCIE (0000:00:12.0) NSID 3 from core 1: 5016.99 19.60 3189.07 1117.69 5821.50 00:10:18.497 ======================================================== 00:10:18.497 Total : 30101.95 117.59 3188.52 1088.87 6506.57 00:10:18.497 00:10:19.875 Initializing NVMe Controllers 00:10:19.875 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:19.875 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:19.875 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:19.875 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:19.875 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:19.875 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:19.875 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:19.875 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:19.875 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:19.875 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:19.875 Initialization complete. Launching workers. 
00:10:19.875 ======================================================== 00:10:19.875 Latency(us) 00:10:19.875 Device Information : IOPS MiB/s Average min max 00:10:19.875 PCIE (0000:00:10.0) NSID 1 from core 0: 8354.87 32.64 1913.48 922.07 10396.04 00:10:19.875 PCIE (0000:00:11.0) NSID 1 from core 0: 8354.87 32.64 1914.62 951.68 10385.66 00:10:19.875 PCIE (0000:00:13.0) NSID 1 from core 0: 8354.87 32.64 1914.59 887.63 10298.82 00:10:19.875 PCIE (0000:00:12.0) NSID 1 from core 0: 8354.87 32.64 1914.57 741.52 6696.67 00:10:19.875 PCIE (0000:00:12.0) NSID 2 from core 0: 8354.87 32.64 1914.54 587.60 6438.59 00:10:19.875 PCIE (0000:00:12.0) NSID 3 from core 0: 8354.87 32.64 1914.50 446.92 10451.58 00:10:19.875 ======================================================== 00:10:19.875 Total : 50129.21 195.82 1914.38 446.92 10451.58 00:10:19.875 00:10:19.875 09:35:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 80632 00:10:19.875 09:35:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=80701 00:10:19.875 09:35:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:19.875 09:35:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=80702 00:10:19.875 09:35:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:19.875 09:35:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:23.163 Initializing NVMe Controllers 00:10:23.163 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:23.163 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:23.163 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:23.163 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:23.163 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:23.163 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:23.163 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:23.163 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:23.163 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:23.163 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:23.163 Initialization complete. Launching workers. 
00:10:23.163 ======================================================== 00:10:23.163 Latency(us) 00:10:23.163 Device Information : IOPS MiB/s Average min max 00:10:23.163 PCIE (0000:00:10.0) NSID 1 from core 0: 4759.72 18.59 3358.86 1009.73 8150.60 00:10:23.163 PCIE (0000:00:11.0) NSID 1 from core 0: 4759.72 18.59 3360.81 1031.35 8312.42 00:10:23.163 PCIE (0000:00:13.0) NSID 1 from core 0: 4759.72 18.59 3360.96 1052.10 8807.75 00:10:23.163 PCIE (0000:00:12.0) NSID 1 from core 0: 4759.72 18.59 3361.03 1039.24 8779.58 00:10:23.163 PCIE (0000:00:12.0) NSID 2 from core 0: 4759.72 18.59 3361.12 1073.26 7984.52 00:10:23.163 PCIE (0000:00:12.0) NSID 3 from core 0: 4765.05 18.61 3357.54 1077.62 7787.00 00:10:23.163 ======================================================== 00:10:23.163 Total : 28563.64 111.58 3360.05 1009.73 8807.75 00:10:23.163 00:10:23.163 Initializing NVMe Controllers 00:10:23.163 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:23.163 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:23.163 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:23.163 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:23.163 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:23.163 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:23.163 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:23.163 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:23.163 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:23.163 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:23.163 Initialization complete. Launching workers. 00:10:23.163 ======================================================== 00:10:23.163 Latency(us) 00:10:23.163 Device Information : IOPS MiB/s Average min max 00:10:23.163 PCIE (0000:00:10.0) NSID 1 from core 1: 4910.61 19.18 3255.59 1037.04 7300.38 00:10:23.163 PCIE (0000:00:11.0) NSID 1 from core 1: 4910.61 19.18 3257.65 1071.39 7188.94 00:10:23.164 PCIE (0000:00:13.0) NSID 1 from core 1: 4910.61 19.18 3257.84 1075.46 7900.92 00:10:23.164 PCIE (0000:00:12.0) NSID 1 from core 1: 4910.61 19.18 3258.11 1088.99 8248.27 00:10:23.164 PCIE (0000:00:12.0) NSID 2 from core 1: 4910.61 19.18 3258.15 1072.14 7872.72 00:10:23.164 PCIE (0000:00:12.0) NSID 3 from core 1: 4910.61 19.18 3258.24 1082.16 7338.07 00:10:23.164 ======================================================== 00:10:23.164 Total : 29463.65 115.09 3257.60 1037.04 8248.27 00:10:23.164 00:10:25.117 Initializing NVMe Controllers 00:10:25.117 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:25.117 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:25.117 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:25.117 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:25.117 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:25.117 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:25.117 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:25.117 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:25.117 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:25.117 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:25.117 Initialization complete. Launching workers. 
00:10:25.117 ======================================================== 00:10:25.117 Latency(us) 00:10:25.117 Device Information : IOPS MiB/s Average min max 00:10:25.117 PCIE (0000:00:10.0) NSID 1 from core 2: 3361.60 13.13 4758.10 1096.28 18132.25 00:10:25.117 PCIE (0000:00:11.0) NSID 1 from core 2: 3361.60 13.13 4759.18 1109.31 13919.12 00:10:25.117 PCIE (0000:00:13.0) NSID 1 from core 2: 3361.60 13.13 4759.43 1127.90 13493.78 00:10:25.117 PCIE (0000:00:12.0) NSID 1 from core 2: 3361.60 13.13 4756.96 1119.39 12917.97 00:10:25.117 PCIE (0000:00:12.0) NSID 2 from core 2: 3361.60 13.13 4756.16 1121.63 13923.38 00:10:25.117 PCIE (0000:00:12.0) NSID 3 from core 2: 3364.80 13.14 4751.81 812.37 17652.14 00:10:25.117 ======================================================== 00:10:25.117 Total : 20172.83 78.80 4756.94 812.37 18132.25 00:10:25.117 00:10:25.117 09:36:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 80701 00:10:25.117 09:36:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 80702 00:10:25.117 00:10:25.117 real 0m10.580s 00:10:25.117 user 0m18.395s 00:10:25.117 sys 0m0.916s 00:10:25.117 09:36:02 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.117 ************************************ 00:10:25.117 END TEST nvme_multi_secondary 00:10:25.117 09:36:02 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:25.117 ************************************ 00:10:25.117 09:36:02 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:25.117 09:36:02 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:25.117 09:36:02 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/79646 ]] 00:10:25.117 09:36:02 nvme -- common/autotest_common.sh@1090 -- # kill 79646 00:10:25.117 09:36:02 nvme -- common/autotest_common.sh@1091 -- # wait 79646 00:10:25.117 [2024-07-24 09:36:02.715112] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80574) is not found. Dropping the request. 00:10:25.117 [2024-07-24 09:36:02.715288] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80574) is not found. Dropping the request. 00:10:25.117 [2024-07-24 09:36:02.715342] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80574) is not found. Dropping the request. 00:10:25.117 [2024-07-24 09:36:02.715392] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80574) is not found. Dropping the request. 00:10:25.117 [2024-07-24 09:36:02.716625] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80574) is not found. Dropping the request. 00:10:25.117 [2024-07-24 09:36:02.716715] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80574) is not found. Dropping the request. 00:10:25.117 [2024-07-24 09:36:02.716760] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80574) is not found. Dropping the request. 00:10:25.117 [2024-07-24 09:36:02.716809] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80574) is not found. Dropping the request. 00:10:25.117 [2024-07-24 09:36:02.717984] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80574) is not found. Dropping the request. 
00:10:25.117 [2024-07-24 09:36:02.718082] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80574) is not found. Dropping the request. 00:10:25.117 [2024-07-24 09:36:02.718128] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80574) is not found. Dropping the request. 00:10:25.117 [2024-07-24 09:36:02.718184] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80574) is not found. Dropping the request. 00:10:25.117 [2024-07-24 09:36:02.719448] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80574) is not found. Dropping the request. 00:10:25.117 [2024-07-24 09:36:02.719546] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80574) is not found. Dropping the request. 00:10:25.117 [2024-07-24 09:36:02.719591] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80574) is not found. Dropping the request. 00:10:25.117 [2024-07-24 09:36:02.719641] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80574) is not found. Dropping the request. 00:10:25.117 09:36:02 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:10:25.117 09:36:02 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:10:25.117 09:36:02 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:25.117 09:36:02 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:25.117 09:36:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.117 09:36:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:25.117 ************************************ 00:10:25.117 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:25.117 ************************************ 00:10:25.117 09:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:25.376 * Looking for test storage... 
00:10:25.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:25.376 09:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:25.376 09:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:25.376 09:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:25.376 09:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:25.376 09:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:25.376 09:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:25.376 09:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:10:25.376 09:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:10:25.376 09:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:10:25.376 09:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:10:25.376 09:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:25.376 09:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:10:25.376 09:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:25.376 09:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:25.376 09:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:25.376 09:36:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:25.376 09:36:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:25.376 09:36:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:10:25.377 09:36:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:25.377 09:36:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:25.377 09:36:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=80858 00:10:25.377 09:36:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:25.377 09:36:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:25.377 09:36:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 80858 00:10:25.377 09:36:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 80858 ']' 00:10:25.377 09:36:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.377 09:36:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:25.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:25.377 09:36:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.377 09:36:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:25.377 09:36:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:25.636 [2024-07-24 09:36:03.206445] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:10:25.636 [2024-07-24 09:36:03.206607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80858 ] 00:10:25.636 [2024-07-24 09:36:03.384673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:25.636 [2024-07-24 09:36:03.433555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.636 [2024-07-24 09:36:03.433715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.636 [2024-07-24 09:36:03.433849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:25.636 [2024-07-24 09:36:03.433716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.205 09:36:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:26.205 09:36:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:10:26.205 09:36:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:26.205 09:36:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.205 09:36:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:26.466 nvme0n1 00:10:26.466 09:36:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.466 09:36:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:26.466 09:36:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_tJX9u.txt 00:10:26.466 09:36:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:26.466 09:36:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.466 09:36:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:26.466 true 00:10:26.466 09:36:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.466 09:36:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:26.466 09:36:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721813764 00:10:26.466 09:36:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=80881 00:10:26.466 09:36:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:26.466 09:36:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:26.466 09:36:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:28.373 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:28.373 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.373 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:28.373 [2024-07-24 09:36:06.100936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:28.373 [2024-07-24 09:36:06.101267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:28.373 [2024-07-24 09:36:06.101297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:28.373 [2024-07-24 09:36:06.101328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.373 [2024-07-24 09:36:06.103228] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:28.373 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.373 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 80881 00:10:28.373 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 80881 00:10:28.373 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 80881 00:10:28.373 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:28.373 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:28.373 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:28.373 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.373 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:28.373 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.373 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:28.373 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_tJX9u.txt 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_tJX9u.txt 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 80858 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 80858 ']' 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 80858 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80858 00:10:28.633 killing process with pid 80858 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80858' 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 80858 00:10:28.633 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 80858 00:10:28.893 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:28.893 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:28.893 00:10:28.893 real 0m3.808s 00:10:28.893 user 0m12.999s 00:10:28.893 sys 0m0.730s 00:10:28.893 ************************************ 00:10:28.893 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:28.893 ************************************ 00:10:28.893 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:28.893 09:36:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:29.152 09:36:06 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:29.152 09:36:06 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:29.152 09:36:06 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:29.152 09:36:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:29.152 09:36:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:29.152 ************************************ 00:10:29.152 START TEST nvme_fio 00:10:29.152 ************************************ 00:10:29.152 09:36:06 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:10:29.152 09:36:06 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:29.153 09:36:06 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:29.153 09:36:06 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:29.153 09:36:06 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:29.153 09:36:06 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:10:29.153 09:36:06 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:29.153 09:36:06 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:29.153 09:36:06 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:29.153 09:36:06 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:29.153 09:36:06 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:29.153 09:36:06 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:29.153 09:36:06 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:29.153 09:36:06 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:29.153 09:36:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:29.153 09:36:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:29.412 09:36:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:29.412 09:36:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:29.672 09:36:07 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:29.672 09:36:07 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:29.672 09:36:07 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:29.672 09:36:07 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:10:29.672 09:36:07 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:29.672 09:36:07 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:29.672 09:36:07 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:29.672 09:36:07 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:29.672 09:36:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:29.672 09:36:07 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:29.672 09:36:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:29.672 09:36:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:29.672 09:36:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:29.672 09:36:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:29.672 09:36:07 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:29.672 09:36:07 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:29.672 09:36:07 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:29.672 09:36:07 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:29.932 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:29.932 fio-3.35 00:10:29.932 Starting 1 thread 00:10:34.125 00:10:34.125 test: (groupid=0, jobs=1): err= 0: pid=81005: Wed Jul 24 09:36:11 2024 00:10:34.125 read: IOPS=23.4k, BW=91.5MiB/s (95.9MB/s)(183MiB/2001msec) 00:10:34.125 slat (nsec): min=3795, max=84547, avg=4455.45, stdev=1250.69 00:10:34.125 clat (usec): min=395, max=14000, avg=2728.71, stdev=355.37 00:10:34.125 lat (usec): min=399, max=14085, avg=2733.17, stdev=355.90 00:10:34.125 clat percentiles (usec): 00:10:34.125 | 1.00th=[ 2409], 5.00th=[ 2507], 10.00th=[ 2540], 20.00th=[ 2606], 00:10:34.125 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:10:34.125 | 70.00th=[ 2769], 80.00th=[ 2835], 90.00th=[ 2900], 95.00th=[ 2933], 00:10:34.125 | 99.00th=[ 3261], 99.50th=[ 3884], 99.90th=[ 7701], 99.95th=[10028], 00:10:34.125 | 99.99th=[13566] 00:10:34.125 bw ( KiB/s): min=89744, max=94640, per=99.09%, avg=92829.33, stdev=2685.38, samples=3 00:10:34.125 iops : min=22436, max=23660, avg=23207.33, stdev=671.35, samples=3 00:10:34.125 write: IOPS=23.3k, BW=90.9MiB/s (95.3MB/s)(182MiB/2001msec); 0 zone resets 00:10:34.125 slat (nsec): min=3865, max=30637, avg=4614.79, stdev=1103.83 00:10:34.125 clat (usec): min=179, max=13694, avg=2735.39, stdev=367.43 00:10:34.125 lat (usec): min=184, max=13725, avg=2740.00, stdev=367.90 00:10:34.125 clat percentiles (usec): 00:10:34.125 | 1.00th=[ 2409], 5.00th=[ 2507], 10.00th=[ 2540], 20.00th=[ 2606], 00:10:34.125 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:10:34.125 | 70.00th=[ 2769], 80.00th=[ 2835], 90.00th=[ 2900], 95.00th=[ 2933], 00:10:34.125 | 99.00th=[ 3326], 99.50th=[ 4621], 99.90th=[ 7832], 99.95th=[10552], 00:10:34.125 | 99.99th=[13173] 00:10:34.125 bw ( KiB/s): min=89200, max=96360, per=99.83%, avg=92936.00, stdev=3590.18, 
samples=3 00:10:34.125 iops : min=22300, max=24090, avg=23234.00, stdev=897.55, samples=3 00:10:34.125 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:34.125 lat (msec) : 2=0.05%, 4=99.39%, 10=0.46%, 20=0.06% 00:10:34.125 cpu : usr=99.25%, sys=0.25%, ctx=9, majf=0, minf=627 00:10:34.125 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:34.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.126 issued rwts: total=46862,46570,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.126 00:10:34.126 Run status group 0 (all jobs): 00:10:34.126 READ: bw=91.5MiB/s (95.9MB/s), 91.5MiB/s-91.5MiB/s (95.9MB/s-95.9MB/s), io=183MiB (192MB), run=2001-2001msec 00:10:34.126 WRITE: bw=90.9MiB/s (95.3MB/s), 90.9MiB/s-90.9MiB/s (95.3MB/s-95.3MB/s), io=182MiB (191MB), run=2001-2001msec 00:10:34.126 ----------------------------------------------------- 00:10:34.126 Suppressions used: 00:10:34.126 count bytes template 00:10:34.126 1 32 /usr/src/fio/parse.c 00:10:34.126 1 8 libtcmalloc_minimal.so 00:10:34.126 ----------------------------------------------------- 00:10:34.126 00:10:34.126 09:36:11 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:34.126 09:36:11 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:34.126 09:36:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:34.126 09:36:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:34.126 09:36:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:34.126 09:36:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:34.387 09:36:12 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:34.387 09:36:12 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:34.387 09:36:12 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:34.387 09:36:12 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:34.387 09:36:12 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:34.387 09:36:12 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:34.387 09:36:12 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:34.387 09:36:12 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:34.387 09:36:12 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:34.387 09:36:12 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:34.387 09:36:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:34.387 09:36:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:34.387 09:36:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:34.387 09:36:12 nvme.nvme_fio -- common/autotest_common.sh@1345 
-- # asan_lib=/usr/lib64/libasan.so.8 00:10:34.387 09:36:12 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:34.387 09:36:12 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:34.387 09:36:12 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:34.387 09:36:12 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:34.387 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:34.387 fio-3.35 00:10:34.387 Starting 1 thread 00:10:38.637 00:10:38.637 test: (groupid=0, jobs=1): err= 0: pid=81071: Wed Jul 24 09:36:16 2024 00:10:38.637 read: IOPS=23.6k, BW=92.3MiB/s (96.8MB/s)(185MiB/2001msec) 00:10:38.637 slat (nsec): min=3789, max=61058, avg=4371.41, stdev=1102.38 00:10:38.637 clat (usec): min=272, max=14814, avg=2703.61, stdev=388.50 00:10:38.637 lat (usec): min=277, max=14875, avg=2707.98, stdev=388.96 00:10:38.637 clat percentiles (usec): 00:10:38.637 | 1.00th=[ 2343], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2573], 00:10:38.637 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:10:38.637 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2868], 95.00th=[ 2933], 00:10:38.637 | 99.00th=[ 3556], 99.50th=[ 4686], 99.90th=[ 7701], 99.95th=[11207], 00:10:38.637 | 99.99th=[14484] 00:10:38.637 bw ( KiB/s): min=91688, max=95056, per=98.98%, avg=93576.00, stdev=1720.67, samples=3 00:10:38.637 iops : min=22922, max=23764, avg=23394.00, stdev=430.17, samples=3 00:10:38.637 write: IOPS=23.5k, BW=91.7MiB/s (96.2MB/s)(184MiB/2001msec); 0 zone resets 00:10:38.637 slat (nsec): min=3871, max=96680, avg=4543.68, stdev=1236.50 00:10:38.637 clat (usec): min=337, max=14560, avg=2711.91, stdev=400.71 00:10:38.637 lat (usec): min=342, max=14582, avg=2716.46, stdev=401.16 00:10:38.637 clat percentiles (usec): 00:10:38.637 | 1.00th=[ 2376], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2573], 00:10:38.637 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:10:38.637 | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 2868], 95.00th=[ 2933], 00:10:38.637 | 99.00th=[ 3589], 99.50th=[ 4752], 99.90th=[ 8848], 99.95th=[11600], 00:10:38.637 | 99.99th=[14091] 00:10:38.637 bw ( KiB/s): min=91352, max=95592, per=99.72%, avg=93661.33, stdev=2145.21, samples=3 00:10:38.637 iops : min=22838, max=23898, avg=23415.33, stdev=536.30, samples=3 00:10:38.637 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:10:38.637 lat (msec) : 2=0.45%, 4=98.81%, 10=0.63%, 20=0.07% 00:10:38.637 cpu : usr=99.30%, sys=0.15%, ctx=5, majf=0, minf=627 00:10:38.637 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:38.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:38.638 issued rwts: total=47294,46984,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.638 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:38.638 00:10:38.638 Run status group 0 (all jobs): 00:10:38.638 READ: bw=92.3MiB/s (96.8MB/s), 92.3MiB/s-92.3MiB/s (96.8MB/s-96.8MB/s), io=185MiB (194MB), run=2001-2001msec 00:10:38.638 WRITE: bw=91.7MiB/s (96.2MB/s), 91.7MiB/s-91.7MiB/s (96.2MB/s-96.2MB/s), io=184MiB (192MB), run=2001-2001msec 00:10:38.638 
----------------------------------------------------- 00:10:38.638 Suppressions used: 00:10:38.638 count bytes template 00:10:38.638 1 32 /usr/src/fio/parse.c 00:10:38.638 1 8 libtcmalloc_minimal.so 00:10:38.638 ----------------------------------------------------- 00:10:38.638 00:10:38.638 09:36:16 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:38.638 09:36:16 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:38.638 09:36:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:38.638 09:36:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:38.896 09:36:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:38.896 09:36:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:39.155 09:36:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:39.155 09:36:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:39.155 09:36:16 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:39.155 09:36:16 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:39.155 09:36:16 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:39.155 09:36:16 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:39.155 09:36:16 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:39.155 09:36:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:39.155 09:36:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:39.155 09:36:16 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:39.155 09:36:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:39.155 09:36:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:39.155 09:36:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:39.155 09:36:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:39.155 09:36:16 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:39.155 09:36:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:39.155 09:36:16 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:39.155 09:36:16 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:39.155 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:39.155 fio-3.35 00:10:39.155 Starting 1 thread 00:10:43.342 00:10:43.342 test: (groupid=0, jobs=1): err= 0: pid=81134: Wed Jul 24 09:36:20 2024 00:10:43.342 read: IOPS=24.0k, BW=93.6MiB/s (98.1MB/s)(187MiB/2001msec) 00:10:43.342 slat (usec): min=3, max=201, avg= 4.30, stdev= 1.42 00:10:43.342 
clat (usec): min=186, max=12929, avg=2668.60, stdev=336.76 00:10:43.342 lat (usec): min=190, max=12979, avg=2672.90, stdev=337.21 00:10:43.342 clat percentiles (usec): 00:10:43.342 | 1.00th=[ 2343], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2540], 00:10:43.342 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2671], 00:10:43.342 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2802], 95.00th=[ 2900], 00:10:43.342 | 99.00th=[ 3392], 99.50th=[ 4359], 99.90th=[ 6849], 99.95th=[ 9503], 00:10:43.342 | 99.99th=[12649] 00:10:43.342 bw ( KiB/s): min=92256, max=97000, per=99.39%, avg=95226.00, stdev=2588.28, samples=3 00:10:43.342 iops : min=23064, max=24250, avg=23806.33, stdev=646.95, samples=3 00:10:43.342 write: IOPS=23.8k, BW=93.0MiB/s (97.5MB/s)(186MiB/2001msec); 0 zone resets 00:10:43.342 slat (usec): min=3, max=195, avg= 4.47, stdev= 1.77 00:10:43.342 clat (usec): min=219, max=12731, avg=2674.28, stdev=343.72 00:10:43.342 lat (usec): min=223, max=12753, avg=2678.76, stdev=344.17 00:10:43.342 clat percentiles (usec): 00:10:43.342 | 1.00th=[ 2343], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2540], 00:10:43.342 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2671], 00:10:43.342 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2835], 95.00th=[ 2900], 00:10:43.342 | 99.00th=[ 3392], 99.50th=[ 4490], 99.90th=[ 7177], 99.95th=[ 9896], 00:10:43.342 | 99.99th=[12256] 00:10:43.342 bw ( KiB/s): min=92104, max=97402, per=100.00%, avg=95227.33, stdev=2773.48, samples=3 00:10:43.342 iops : min=23026, max=24350, avg=23806.67, stdev=693.17, samples=3 00:10:43.342 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:10:43.342 lat (msec) : 2=0.08%, 4=99.32%, 10=0.51%, 20=0.05% 00:10:43.342 cpu : usr=98.85%, sys=0.35%, ctx=17, majf=0, minf=626 00:10:43.342 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:43.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:43.342 issued rwts: total=47929,47625,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:43.342 00:10:43.342 Run status group 0 (all jobs): 00:10:43.342 READ: bw=93.6MiB/s (98.1MB/s), 93.6MiB/s-93.6MiB/s (98.1MB/s-98.1MB/s), io=187MiB (196MB), run=2001-2001msec 00:10:43.342 WRITE: bw=93.0MiB/s (97.5MB/s), 93.0MiB/s-93.0MiB/s (97.5MB/s-97.5MB/s), io=186MiB (195MB), run=2001-2001msec 00:10:43.342 ----------------------------------------------------- 00:10:43.342 Suppressions used: 00:10:43.342 count bytes template 00:10:43.342 1 32 /usr/src/fio/parse.c 00:10:43.342 1 8 libtcmalloc_minimal.so 00:10:43.342 ----------------------------------------------------- 00:10:43.342 00:10:43.342 09:36:21 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:43.342 09:36:21 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:43.342 09:36:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:43.342 09:36:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:43.601 09:36:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:43.601 09:36:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:43.859 09:36:21 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:43.859 09:36:21 nvme.nvme_fio -- 
nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:43.859 09:36:21 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:43.859 09:36:21 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:43.859 09:36:21 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:43.859 09:36:21 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:43.859 09:36:21 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:43.859 09:36:21 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:43.859 09:36:21 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:43.859 09:36:21 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:43.859 09:36:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:43.859 09:36:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:43.859 09:36:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:43.859 09:36:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:43.859 09:36:21 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:43.859 09:36:21 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:43.859 09:36:21 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:43.859 09:36:21 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:44.119 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:44.119 fio-3.35 00:10:44.119 Starting 1 thread 00:10:48.310 00:10:48.310 test: (groupid=0, jobs=1): err= 0: pid=81199: Wed Jul 24 09:36:25 2024 00:10:48.310 read: IOPS=23.3k, BW=91.0MiB/s (95.4MB/s)(182MiB/2001msec) 00:10:48.310 slat (nsec): min=3813, max=64482, avg=4501.25, stdev=1243.35 00:10:48.310 clat (usec): min=189, max=14074, avg=2745.08, stdev=473.51 00:10:48.310 lat (usec): min=193, max=14138, avg=2749.58, stdev=474.09 00:10:48.310 clat percentiles (usec): 00:10:48.310 | 1.00th=[ 2343], 5.00th=[ 2507], 10.00th=[ 2540], 20.00th=[ 2606], 00:10:48.310 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:10:48.310 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2868], 95.00th=[ 2933], 00:10:48.310 | 99.00th=[ 4555], 99.50th=[ 5866], 99.90th=[ 8848], 99.95th=[10814], 00:10:48.310 | 99.99th=[13829] 00:10:48.310 bw ( KiB/s): min=91848, max=95680, per=100.00%, avg=93498.67, stdev=1970.35, samples=3 00:10:48.310 iops : min=22962, max=23920, avg=23374.67, stdev=492.59, samples=3 00:10:48.310 write: IOPS=23.1k, BW=90.4MiB/s (94.8MB/s)(181MiB/2001msec); 0 zone resets 00:10:48.310 slat (nsec): min=3909, max=40060, avg=4671.86, stdev=1223.42 00:10:48.310 clat (usec): min=281, max=13904, avg=2749.72, stdev=480.54 00:10:48.310 lat (usec): min=286, max=13926, avg=2754.39, stdev=481.12 
00:10:48.310 clat percentiles (usec): 00:10:48.310 | 1.00th=[ 2343], 5.00th=[ 2507], 10.00th=[ 2540], 20.00th=[ 2606], 00:10:48.310 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:10:48.310 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2868], 95.00th=[ 2933], 00:10:48.310 | 99.00th=[ 4490], 99.50th=[ 5866], 99.90th=[ 8979], 99.95th=[11076], 00:10:48.310 | 99.99th=[13435] 00:10:48.310 bw ( KiB/s): min=91752, max=94560, per=100.00%, avg=93586.67, stdev=1589.85, samples=3 00:10:48.310 iops : min=22938, max=23640, avg=23396.67, stdev=397.46, samples=3 00:10:48.310 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:48.310 lat (msec) : 2=0.38%, 4=98.32%, 10=1.18%, 20=0.07% 00:10:48.310 cpu : usr=99.30%, sys=0.15%, ctx=3, majf=0, minf=625 00:10:48.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:48.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.310 issued rwts: total=46615,46298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.310 00:10:48.310 Run status group 0 (all jobs): 00:10:48.310 READ: bw=91.0MiB/s (95.4MB/s), 91.0MiB/s-91.0MiB/s (95.4MB/s-95.4MB/s), io=182MiB (191MB), run=2001-2001msec 00:10:48.310 WRITE: bw=90.4MiB/s (94.8MB/s), 90.4MiB/s-90.4MiB/s (94.8MB/s-94.8MB/s), io=181MiB (190MB), run=2001-2001msec 00:10:48.310 ----------------------------------------------------- 00:10:48.310 Suppressions used: 00:10:48.310 count bytes template 00:10:48.310 1 32 /usr/src/fio/parse.c 00:10:48.310 1 8 libtcmalloc_minimal.so 00:10:48.310 ----------------------------------------------------- 00:10:48.310 00:10:48.310 09:36:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:48.310 09:36:25 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:48.310 00:10:48.310 real 0m18.711s 00:10:48.310 user 0m14.438s 00:10:48.310 sys 0m4.273s 00:10:48.310 ************************************ 00:10:48.310 END TEST nvme_fio 00:10:48.310 ************************************ 00:10:48.310 09:36:25 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.310 09:36:25 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:48.310 ************************************ 00:10:48.310 END TEST nvme 00:10:48.310 ************************************ 00:10:48.310 00:10:48.310 real 1m30.067s 00:10:48.310 user 3m31.799s 00:10:48.310 sys 0m21.990s 00:10:48.310 09:36:25 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.310 09:36:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:48.310 09:36:25 -- spdk/autotest.sh@221 -- # [[ 0 -eq 1 ]] 00:10:48.310 09:36:25 -- spdk/autotest.sh@225 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:48.310 09:36:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:48.310 09:36:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.310 09:36:25 -- common/autotest_common.sh@10 -- # set +x 00:10:48.310 ************************************ 00:10:48.310 START TEST nvme_scc 00:10:48.310 ************************************ 00:10:48.310 09:36:25 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:48.310 * Looking for test storage... 
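The fio runs traced above all go through the same fio_plugin wrapper from autotest_common.sh: locate the ASan runtime that the SPDK ioengine links against, then preload both it and the plugin before starting fio on the example job file (which sets ioengine=spdk). A condensed sketch of that invocation, reusing the paths and the first PCIe address from this log:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
    # The wrapper scans `ldd $plugin` for a sanitizer runtime so ASan is
    # initialized ahead of the ioengine when the plugin was built with it.
    asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job" \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096

The PCIe address is written with dots rather than colons, exactly as in the trace, because fio reserves ':' inside filenames. This is a condensed reading of the xtrace, not the full helper, which also probes for libclang_rt.asan and stops at the first runtime it finds.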
00:10:48.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:48.310 09:36:25 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:48.310 09:36:25 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:48.310 09:36:25 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:48.310 09:36:25 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:48.310 09:36:25 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:48.310 09:36:25 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.310 09:36:25 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.310 09:36:25 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.310 09:36:25 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.311 09:36:25 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.311 09:36:25 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.311 09:36:25 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:48.311 09:36:25 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.311 09:36:25 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:48.311 09:36:25 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:48.311 09:36:25 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:48.311 09:36:25 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:48.311 09:36:25 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:48.311 09:36:25 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:48.311 09:36:25 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:48.311 09:36:25 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:48.311 09:36:25 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:48.311 09:36:25 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:48.311 09:36:25 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:48.311 09:36:25 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:48.311 09:36:25 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:48.311 09:36:25 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:48.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:48.828 Waiting for block devices as requested 00:10:48.828 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:49.087 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:49.087 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:49.345 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:54.623 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:54.623 09:36:32 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:54.623 09:36:32 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:54.623 09:36:32 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:10:54.623 09:36:32 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:54.623 09:36:32 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:54.623 
09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.623 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:54.624 09:36:32 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.624 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:54.625 09:36:32 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:54.625 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:54.626 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
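The trace above repeats one pattern for every field reported by `nvme id-ns` (and by `nvme id-ctrl` earlier in the log): nvme/functions.sh splits each output line on ':' (functions.sh@21), skips entries with no value (functions.sh@22), and stores the pair in a per-device global associative array via eval (functions.sh@23). A minimal bash sketch of what the traced nvme_get helper appears to do follows; it is a reconstruction from the trace, not the SPDK source, and the key-trimming step is an added assumption.

# Sketch of the field-parsing loop traced at nvme/functions.sh@16-23.
# Assumes nvme-cli is installed; names and trimming are illustrative.
nvme_get_sketch() {
    local ref=$1 reg val            # e.g. ref=nvme0n1
    shift
    local -gA "$ref=()"             # one global associative array per device
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue               # keep only "key: value" lines
        reg=${reg//[[:space:]]/}                # trim the key (assumption)
        eval "${ref}[${reg}]=\"${val# }\""      # e.g. nvme0n1[nsze]="0x140000"
    done < <("$@")                              # e.g. nvme id-ns /dev/nvme0n1
}

# Usage sketch:
#   nvme_get_sketch nvme0n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
#   echo "${nvme0n1[nsze]}"   # -> 0x140000 on the namespace dumped above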
00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:54.627 09:36:32 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.627 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:54.628 09:36:32 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:54.628 09:36:32 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:10:54.628 09:36:32 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:54.628 09:36:32 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.628 09:36:32 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:54.628 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:54.629 09:36:32 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 
09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:54.629 09:36:32 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.629 09:36:32 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.629 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:54.630 09:36:32 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:54.630 09:36:32 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.630 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.631 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.632 
09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
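One controller field dumped above is directly relevant to this suite (nvme_scc exercises the NVMe Simple Copy command): nvme1 reported oncs=0x15d. Per the NVMe base specification, ONCS bit 8 (0x100) advertises Copy command support, so that bit being set is what lets the copy paths run against nvme1. A hedged sketch of such a gate, once the arrays above are populated (the variable handling is illustrative, not the test's actual helper):

# Assumes the nvme1 associative array filled in by the traced nvme_get calls.
oncs=${nvme1[oncs]:-0}            # 0x15d in the dump above
if (( oncs & 0x100 )); then       # ONCS bit 8: Copy command supported
    echo "nvme1 advertises the Copy command (oncs=$oncs)"
else
    echo "nvme1 lacks Copy support; SCC cases would be skipped"
fi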
00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:54.632 09:36:32 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:54.632 09:36:32 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:54.632 09:36:32 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:10:54.633 09:36:32 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:54.633 09:36:32 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:54.633 09:36:32 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.633 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:54.634 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:54.635 09:36:32 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 
09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:10:54.635 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.636 09:36:32 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:54.636 09:36:32 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.636 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:54.637 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:54.638 09:36:32 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:54.638 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:54.639 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
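(Editor's note: the trace above shows the test walking each namespace under /sys/class/nvme/nvme2 and tokenizing `nvme id-ns` output into a global bash associative array (nvme2n1, nvme2n2, nvme2n3), one `field : value` pair per line via `IFS=:` and `read -r reg val`. The following is a minimal sketch of that pattern only, not the actual nvme/functions.sh implementation; the helper name `parse_id_ns` is made up, and it assumes nvme-cli is installed and the device node exists.)

```bash
# Sketch only: approximates the nvme_get pattern visible in this trace,
# not the real nvme/functions.sh code. Helper name parse_id_ns is hypothetical.
parse_id_ns() {
    local dev=$1 reg val
    declare -gA "${dev##*/}=()"            # e.g. a global array nvme2n2=()
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        reg=${reg//[[:space:]]/}           # "lbaf  4 " -> "lbaf4"
        # store e.g. nvme2n2[nsze]=0x100000, as the trace does via eval
        eval "${dev##*/}[$reg]=\"\${val# }\""
    done < <(nvme id-ns "$dev")
}

parse_id_ns /dev/nvme2n2
echo "nsze=${nvme2n2[nsze]} flbas=${nvme2n2[flbas]}"
```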
00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:54.640 09:36:32 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:54.641 09:36:32 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:54.641 09:36:32 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:10:54.641 09:36:32 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:54.641 09:36:32 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:54.641 09:36:32 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:54.641 09:36:32 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:54.641 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:54.642 09:36:32 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.642 09:36:32 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.642 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:54.643 09:36:32 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:54.643 
09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.643 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
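The stretch of trace above is the tail of functions.sh collecting every id-ctrl field for nvme3 into a bash associative array: each 'field : value' line from nvme-cli is split with IFS=: and read -r reg val, and non-empty values are assigned via eval. A minimal standalone sketch of that parsing pattern, assuming nvme-cli's plain-text id-ctrl output; the array and device names are illustrative, not the exact helpers in functions.sh:

    #!/usr/bin/env bash
    # Sketch: mirror the IFS=: / read -r reg val / eval pattern from the trace,
    # loading "nvme id-ctrl" output into an associative array keyed by field name.
    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # "ps    0 " -> "ps0", "oncs      " -> "oncs"
        val=${val# }                  # drop the single space nvme-cli prints after ':'
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)
    printf 'oncs=%s sqes=%s cqes=%s subnqn=%s\n' \
        "${ctrl[oncs]}" "${ctrl[sqes]}" "${ctrl[cqes]}" "${ctrl[subnqn]}"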
00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:54.644 09:36:32 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:10:54.644 09:36:32 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:10:54.644 09:36:32 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:10:54.644 09:36:32 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:54.644 09:36:32 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:54.644 09:36:32 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:55.257 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:56.191 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:56.191 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:56.191 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:56.191 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:56.191 09:36:33 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:56.191 09:36:33 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:56.191 09:36:33 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:56.191 09:36:33 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:56.191 ************************************ 00:10:56.191 START TEST nvme_simple_copy 00:10:56.191 ************************************ 00:10:56.191 09:36:33 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:56.450 Initializing NVMe Controllers 00:10:56.450 Attaching to 0000:00:10.0 00:10:56.450 Controller supports SCC. Attached to 0000:00:10.0 00:10:56.450 Namespace ID: 1 size: 6GB 00:10:56.450 Initialization complete. 00:10:56.450 00:10:56.450 Controller QEMU NVMe Ctrl (12340 ) 00:10:56.450 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:56.450 Namespace Block Size:4096 00:10:56.450 Writing LBAs 0 to 63 with Random Data 00:10:56.450 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:56.450 LBAs matching Written Data: 64 00:10:56.450 00:10:56.450 real 0m0.287s 00:10:56.450 user 0m0.101s 00:10:56.450 sys 0m0.084s 00:10:56.450 09:36:34 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.450 ************************************ 00:10:56.450 END TEST nvme_simple_copy 00:10:56.450 ************************************ 00:10:56.450 09:36:34 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:56.708 ************************************ 00:10:56.708 END TEST nvme_scc 00:10:56.708 ************************************ 00:10:56.708 00:10:56.708 real 0m8.733s 00:10:56.708 user 0m1.362s 00:10:56.708 sys 0m2.309s 00:10:56.708 09:36:34 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.708 09:36:34 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:56.708 09:36:34 -- spdk/autotest.sh@227 -- # [[ 0 -eq 1 ]] 00:10:56.708 09:36:34 -- spdk/autotest.sh@230 -- # [[ 0 -eq 1 ]] 00:10:56.708 09:36:34 -- spdk/autotest.sh@233 -- # [[ '' -eq 1 ]] 00:10:56.708 09:36:34 -- spdk/autotest.sh@236 -- # [[ 1 -eq 1 ]] 00:10:56.708 09:36:34 -- spdk/autotest.sh@237 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:56.708 09:36:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:56.708 09:36:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:56.708 09:36:34 -- common/autotest_common.sh@10 -- # set +x 00:10:56.708 ************************************ 00:10:56.708 START TEST nvme_fdp 00:10:56.708 ************************************ 00:10:56.708 09:36:34 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:10:56.708 * Looking for test storage... 
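The controller-selection loop traced just before the simple-copy run (ctrl_has_scc -> get_oncs) reduces to a single bit test: ONCS bit 8 is the Copy command capability, and 0x15d has that bit set, which is why all four controllers qualify and nvme1 at 0000:00:10.0 gets picked. A sketch of the same check against live devices, assuming nvme-cli is installed; the helper name is illustrative:

    #!/usr/bin/env bash
    # Sketch: the same oncs & (1 << 8) test functions.sh uses to decide whether
    # a controller supports the Copy (simple copy) command.
    ctrl_supports_copy() {
        local dev=$1 oncs
        oncs=$(nvme id-ctrl "$dev" | awk '$1 == "oncs" {print $3}')
        [[ -n $oncs ]] && (( oncs & 1 << 8 ))
    }
    for dev in /dev/nvme[0-9]*; do
        [[ -c $dev ]] || continue               # controller char devices only
        ctrl_supports_copy "$dev" && echo "$dev supports Simple Copy (ONCS bit 8)"
    done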
00:10:56.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:56.708 09:36:34 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:56.708 09:36:34 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:56.708 09:36:34 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:56.708 09:36:34 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:56.708 09:36:34 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:56.708 09:36:34 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.708 09:36:34 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.708 09:36:34 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.708 09:36:34 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.708 09:36:34 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.708 09:36:34 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.708 09:36:34 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:56.708 09:36:34 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.708 09:36:34 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:56.708 09:36:34 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:56.708 09:36:34 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:56.708 09:36:34 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:56.708 09:36:34 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:56.708 09:36:34 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:56.708 09:36:34 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:56.708 09:36:34 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:56.708 09:36:34 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:56.708 09:36:34 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:56.708 09:36:34 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:57.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:57.535 Waiting for block devices as requested 00:10:57.793 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:57.793 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:57.793 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:58.051 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:03.332 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:03.332 09:36:40 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:03.332 09:36:40 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:03.332 09:36:40 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:03.332 09:36:40 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:03.332 09:36:40 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.332 
09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.332 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:03.333 09:36:40 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:03.333 09:36:40 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:03.333 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:03.334 09:36:40 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:03.334 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:03.335 
09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:03.335 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:03.336 
09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:03.336 09:36:40 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:03.336 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:03.337 09:36:40 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:03.337 09:36:40 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:03.337 09:36:40 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:03.337 09:36:40 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:03.337 09:36:40 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 
09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:03.337 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:03.338 09:36:40 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:03.338 09:36:40 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.338 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.339 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:03.340 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
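[editor's note] The trace above is the per-field half of the pattern: nvme/functions.sh@16-23 pipe the `nvme id-ctrl`/`id-ns` output through an `IFS=:` / `read -r reg val` loop and eval each pair into a global associative array named after the device (nvme1, nvme1n1, ...). Below is a minimal, simplified sketch of that loop for readers following the log; the helper name nvme_get_sketch, the whitespace-stripping of the key, and the bare `nvme` invocation are illustrative assumptions, not the verbatim functions.sh code.

    # Simplified sketch of the id-ctrl/id-ns parsing loop seen in the trace.
    # Hypothetical usage: nvme_get_sketch nvme1n1 id-ns /dev/nvme1n1
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        declare -gA "$ref"                  # the traced helper uses: local -gA 'nvme1n1=()'
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # skip header/blank lines, as the [[ -n '' ]] checks show
            reg=${reg//[[:space:]]/}        # simplification: "lbaf  0 " -> "lbaf0"
            eval "${ref}[${reg}]=\${val# }" # e.g. nvme1n1[nsze]=0x17a17a
        done < <(nvme "$@")                 # this CI run invokes /usr/local/src/nvme-cli/nvme
    }

After a call like the hypothetical usage above, ${nvme1n1[nsze]} would hold 0x17a17a and ${nvme1n1[flbas]} 0x7, matching the assignments logged in the trace.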
00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 
09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
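[editor's note] Once a namespace's fields are filled in, the trace moves to the bookkeeping half of the loop (functions.sh@53-63, plus the nvme2 discovery gated by scripts/common.sh): each namespace is recorded in a per-controller <ctrl>_ns map, and the controller is registered in the ctrls/nvmes/bdfs/ordered_ctrls tables before the /sys/class/nvme/nvme* loop advances to the next device. A hedged sketch of that flow follows; it reuses the nvme_get_sketch helper above, stubs out pci_can_use (the real gate in scripts/common.sh applies PCI allow/block lists), and derives the BDF from the device symlink, which is an assumption rather than the traced code.

    # Hedged sketch of the controller/namespace enumeration and registration flow.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    pci_can_use() { return 0; }                   # stub for the scripts/common.sh allow/block-list check

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumption: BDF taken from the device symlink
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                              # e.g. nvme2
        nvme_get_sketch "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        declare -gA "${ctrl_dev}_ns"
        declare -n _ctrl_ns=${ctrl_dev}_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do               # namespaces, e.g. nvme2n1, nvme2n2
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}
            nvme_get_sketch "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns_dev##*n}]=$ns_dev               # e.g. nvme2_ns[1]=nvme2n1
        done
        unset -n _ctrl_ns
        ctrls[$ctrl_dev]=$ctrl_dev                        # mirrors ctrls["$ctrl_dev"]=nvme2 in the trace
        nvmes[$ctrl_dev]=${ctrl_dev}_ns
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done

This is why the log below repeats the same id-ctrl/id-ns field dump for nvme2, nvme2n1, and then nvme2n2: the outer loop simply re-runs the parser for each discovered controller and namespace and files the results into these tables.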
00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:03.341 09:36:40 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:03.341 09:36:40 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:03.341 09:36:40 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:03.341 09:36:40 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.341 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:03.342 09:36:40 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.342 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.343 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:03.344 09:36:40 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:03.344 09:36:40 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.344 
09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.344 09:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:03.345 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:03.345 09:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:03.345 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:03.345 09:36:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.345 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:03.346 09:36:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:03.346 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
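All of the xtrace records in this stretch come from the nvme_get loop in nvme/functions.sh: each `reg: val` line printed by /usr/local/src/nvme-cli/nvme id-ns is split on ':', empty values are skipped, and the pair is eval'd into a global associative array named after the device (nvme2n1, nvme2n2, and so on). A minimal standalone sketch of that pattern follows; the gather_id name and the key trimming are illustrative assumptions, not the exact functions.sh code.

# Sketch only: approximate the decode loop the trace above is exercising.
gather_id() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                  # global associative array, e.g. nvme2n2=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}         # "nsze    " -> "nsze"
        [[ -n $val ]] || continue        # skip heading lines with no value
        eval "${ref}[\$reg]=\$val"       # e.g. nvme2n2[nsze]=0x100000
    done < <(/usr/local/src/nvme-cli/nvme id-ns "$dev")
}

# Illustrative discovery pass over one controller's namespaces, mirroring the
# `for ns in "$ctrl/${ctrl##*/}n"*` iteration visible in the trace:
for ns in /sys/class/nvme/nvme2/nvme2n*; do
    [[ -e $ns ]] || continue
    gather_id "${ns##*/}" "/dev/${ns##*/}"
done

On a host with these QEMU NVMe devices this leaves nvme2n1[nsze], nvme2n2[flbas] and the rest queryable exactly as the assignments traced above show.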
00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
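One practical use of the per-namespace arrays being filled here is working out the in-use block size: flbas (0x4 for nvme2n2, just above) selects which lbaf string applies, and that string carries the lbads exponent (the 'ms:0 lbads:12 rp:0 (in use)' entries recorded for each namespace). The helper below is hypothetical and seeded with those two values from the trace so it runs standalone; it is not a function from nvme/functions.sh.

# Hypothetical helper; the seed values are copied from this trace.
declare -A nvme2n2=( [flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)' )

lbads_bytes() {
    local -n _ns=$1                          # nameref to the namespace array
    local fmt=$(( ${_ns[flbas]} & 0xf ))     # low nibble of flbas picks the format
    local lbads=${_ns[lbaf$fmt]#*lbads:}     # pull the lbads exponent out of the string
    echo $(( 1 << ${lbads%% *} ))            # 2^lbads bytes
}

lbads_bytes nvme2n2                          # prints 4096 for this namespace

lbads_bytes nvme2n2 prints 4096, matching the 4 KiB format flagged '(in use)' in the decode.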
00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.347 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.348 09:36:41 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.348 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:03.349 09:36:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.349 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:03.350 09:36:41 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:03.350 09:36:41 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:03.350 09:36:41 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:03.350 09:36:41 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:03.350 09:36:41 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.350 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.350 09:36:41 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
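With id-ctrl decoded the same way, controller capabilities can be read straight from the resulting array; for an nvme_fdp run the field of interest is ctratt, captured above for nvme3 as 0x88010. The check below is a hypothetical illustration rather than code from functions.sh: the fdp_capable name is invented, and the bit position follows the NVMe 2.0 definition of CTRATT bit 19 (Flexible Data Placement).

# Hypothetical capability check; the ctratt value is copied from the trace so
# the snippet is self-contained.
declare -A nvme3=( [ctratt]=0x88010 )

fdp_capable() {
    local -n _ctrl=$1                        # nameref to the controller's array
    (( ${_ctrl[ctratt]:-0} & (1 << 19) ))    # CTRATT bit 19: Flexible Data Placement
}

fdp_capable nvme3 && echo "nvme3 advertises FDP"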
00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.351 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.352 
09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.352 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.353 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:03.353 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:03.353 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.353 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.353 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.353 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:03.353 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:03.353 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:03.612 09:36:41 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:03.612 09:36:41 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:11:03.612 09:36:41 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:11:03.612 09:36:41 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:03.612 09:36:41 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:03.612 09:36:41 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:04.180 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:04.746 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:04.746 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:05.006 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:05.006 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:05.006 09:36:42 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:05.006 09:36:42 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:05.006 09:36:42 nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.006 09:36:42 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:05.006 ************************************ 00:11:05.006 START TEST nvme_flexible_data_placement 00:11:05.006 ************************************ 00:11:05.006 09:36:42 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:05.265 Initializing NVMe Controllers 00:11:05.265 Attaching to 0000:00:13.0 00:11:05.265 Controller supports FDP Attached to 0000:00:13.0 00:11:05.265 Namespace ID: 1 Endurance Group ID: 1 
00:11:05.265 Initialization complete. 00:11:05.265 00:11:05.265 ================================== 00:11:05.265 == FDP tests for Namespace: #01 == 00:11:05.265 ================================== 00:11:05.265 00:11:05.265 Get Feature: FDP: 00:11:05.265 ================= 00:11:05.265 Enabled: Yes 00:11:05.265 FDP configuration Index: 0 00:11:05.265 00:11:05.265 FDP configurations log page 00:11:05.265 =========================== 00:11:05.265 Number of FDP configurations: 1 00:11:05.265 Version: 0 00:11:05.265 Size: 112 00:11:05.265 FDP Configuration Descriptor: 0 00:11:05.265 Descriptor Size: 96 00:11:05.265 Reclaim Group Identifier format: 2 00:11:05.265 FDP Volatile Write Cache: Not Present 00:11:05.265 FDP Configuration: Valid 00:11:05.265 Vendor Specific Size: 0 00:11:05.265 Number of Reclaim Groups: 2 00:11:05.265 Number of Recalim Unit Handles: 8 00:11:05.265 Max Placement Identifiers: 128 00:11:05.265 Number of Namespaces Suppprted: 256 00:11:05.265 Reclaim unit Nominal Size: 6000000 bytes 00:11:05.265 Estimated Reclaim Unit Time Limit: Not Reported 00:11:05.265 RUH Desc #000: RUH Type: Initially Isolated 00:11:05.265 RUH Desc #001: RUH Type: Initially Isolated 00:11:05.265 RUH Desc #002: RUH Type: Initially Isolated 00:11:05.266 RUH Desc #003: RUH Type: Initially Isolated 00:11:05.266 RUH Desc #004: RUH Type: Initially Isolated 00:11:05.266 RUH Desc #005: RUH Type: Initially Isolated 00:11:05.266 RUH Desc #006: RUH Type: Initially Isolated 00:11:05.266 RUH Desc #007: RUH Type: Initially Isolated 00:11:05.266 00:11:05.266 FDP reclaim unit handle usage log page 00:11:05.266 ====================================== 00:11:05.266 Number of Reclaim Unit Handles: 8 00:11:05.266 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:05.266 RUH Usage Desc #001: RUH Attributes: Unused 00:11:05.266 RUH Usage Desc #002: RUH Attributes: Unused 00:11:05.266 RUH Usage Desc #003: RUH Attributes: Unused 00:11:05.266 RUH Usage Desc #004: RUH Attributes: Unused 00:11:05.266 RUH Usage Desc #005: RUH Attributes: Unused 00:11:05.266 RUH Usage Desc #006: RUH Attributes: Unused 00:11:05.266 RUH Usage Desc #007: RUH Attributes: Unused 00:11:05.266 00:11:05.266 FDP statistics log page 00:11:05.266 ======================= 00:11:05.266 Host bytes with metadata written: 1720995840 00:11:05.266 Media bytes with metadata written: 1721847808 00:11:05.266 Media bytes erased: 0 00:11:05.266 00:11:05.266 FDP Reclaim unit handle status 00:11:05.266 ============================== 00:11:05.266 Number of RUHS descriptors: 2 00:11:05.266 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000056bb 00:11:05.266 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:05.266 00:11:05.266 FDP write on placement id: 0 success 00:11:05.266 00:11:05.266 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:05.266 00:11:05.266 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:05.266 00:11:05.266 Get Feature: FDP Events for Placement handle: #0 00:11:05.266 ======================== 00:11:05.266 Number of FDP Events: 6 00:11:05.266 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:05.266 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:05.266 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:05.266 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:05.266 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:05.266 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
00:11:05.266 00:11:05.266 FDP events log page 00:11:05.266 =================== 00:11:05.266 Number of FDP events: 1 00:11:05.266 FDP Event #0: 00:11:05.266 Event Type: RU Not Written to Capacity 00:11:05.266 Placement Identifier: Valid 00:11:05.266 NSID: Valid 00:11:05.266 Location: Valid 00:11:05.266 Placement Identifier: 0 00:11:05.266 Event Timestamp: 3 00:11:05.266 Namespace Identifier: 1 00:11:05.266 Reclaim Group Identifier: 0 00:11:05.266 Reclaim Unit Handle Identifier: 0 00:11:05.266 00:11:05.266 FDP test passed 00:11:05.266 00:11:05.266 real 0m0.251s 00:11:05.266 user 0m0.065s 00:11:05.266 sys 0m0.085s 00:11:05.266 ************************************ 00:11:05.266 END TEST nvme_flexible_data_placement 00:11:05.266 ************************************ 00:11:05.266 09:36:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.266 09:36:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:05.526 ************************************ 00:11:05.526 END TEST nvme_fdp 00:11:05.526 ************************************ 00:11:05.526 00:11:05.526 real 0m8.706s 00:11:05.526 user 0m1.377s 00:11:05.526 sys 0m2.410s 00:11:05.526 09:36:43 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.526 09:36:43 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:05.526 09:36:43 -- spdk/autotest.sh@240 -- # [[ '' -eq 1 ]] 00:11:05.526 09:36:43 -- spdk/autotest.sh@244 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:05.526 09:36:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:05.526 09:36:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.526 09:36:43 -- common/autotest_common.sh@10 -- # set +x 00:11:05.526 ************************************ 00:11:05.526 START TEST nvme_rpc 00:11:05.526 ************************************ 00:11:05.526 09:36:43 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:05.526 * Looking for test storage... 
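
Note: before the nvme_rpc test below, a word on how the nvme_fdp run above picked its target. As traced, nvme/functions.sh reads each controller's identify output into a bash associative array (the long nvme3[...]=... assignments) and then treats CTRATT bit 19 as the FDP-capability flag, which is why the controller reporting ctratt=0x88010 (nvme3 at 0000:00:13.0) is chosen while the 0x8000 controllers are skipped. The sketch below only illustrates that pattern and is not the actual nvme/functions.sh source; identify_output and the trimming are assumed details, while the bit test itself comes from the trace.

declare -A nvme3
while IFS=: read -r reg val; do                      # identify output as 'reg : value' lines (assumed format)
  reg=${reg//[[:space:]]/}; val=${val# }             # trimming is illustrative only
  [[ -n $val ]] && eval "nvme3[$reg]=\"$val\""       # e.g. nvme3[ctratt]=0x88010
done <<< "$identify_output"

ctrl_has_fdp() {                                     # same test the trace performs at functions.sh@178
  local -n _ctrl=$1
  local ctratt=${_ctrl[ctratt]:-0}
  (( ctratt & 1 << 19 ))                             # CTRATT bit 19 == Flexible Data Placement supported
}
ctrl_has_fdp nvme3 && echo "nvme3 supports FDP"      # true for 0x88010, false for 0x8000
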
00:11:05.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:05.526 09:36:43 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:05.526 09:36:43 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:05.526 09:36:43 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:11:05.526 09:36:43 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:11:05.526 09:36:43 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:11:05.526 09:36:43 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:11:05.526 09:36:43 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:05.526 09:36:43 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:11:05.526 09:36:43 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:05.526 09:36:43 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:05.526 09:36:43 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:05.785 09:36:43 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:05.785 09:36:43 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:05.785 09:36:43 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:11:05.785 09:36:43 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:05.785 09:36:43 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=82553 00:11:05.785 09:36:43 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:05.785 09:36:43 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:05.785 09:36:43 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 82553 00:11:05.785 09:36:43 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 82553 ']' 00:11:05.785 09:36:43 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.785 09:36:43 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:05.785 09:36:43 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.785 09:36:43 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:05.785 09:36:43 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.785 [2024-07-24 09:36:43.502430] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
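
Note: the nvme_rpc test resolves its target BDF exactly as traced above: scripts/gen_nvme.sh emits an SPDK config, jq pulls every .config[].params.traddr, and the first address is used (0000:00:10.0 in this run). A minimal sketch of that lookup; the wrapper below condenses the traced get_nvme_bdfs/get_first_nvme_bdf helpers and assumes $rootdir points at the SPDK checkout.

get_first_nvme_bdf() {
  local -a bdfs
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || return 1                  # no NVMe devices found
  echo "${bdfs[0]}"
}
bdf=$(get_first_nvme_bdf)                            # -> 0000:00:10.0 here
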
00:11:05.785 [2024-07-24 09:36:43.502747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82553 ] 00:11:06.043 [2024-07-24 09:36:43.672174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:06.043 [2024-07-24 09:36:43.724652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.043 [2024-07-24 09:36:43.724747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.611 09:36:44 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:06.611 09:36:44 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:06.611 09:36:44 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:06.870 Nvme0n1 00:11:06.870 09:36:44 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:06.870 09:36:44 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:07.129 request: 00:11:07.129 { 00:11:07.129 "bdev_name": "Nvme0n1", 00:11:07.129 "filename": "non_existing_file", 00:11:07.129 "method": "bdev_nvme_apply_firmware", 00:11:07.129 "req_id": 1 00:11:07.129 } 00:11:07.129 Got JSON-RPC error response 00:11:07.129 response: 00:11:07.129 { 00:11:07.129 "code": -32603, 00:11:07.129 "message": "open file failed." 00:11:07.129 } 00:11:07.129 09:36:44 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:07.129 09:36:44 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:07.129 09:36:44 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:07.129 09:36:44 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:07.129 09:36:44 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 82553 00:11:07.129 09:36:44 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 82553 ']' 00:11:07.129 09:36:44 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 82553 00:11:07.129 09:36:44 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:11:07.129 09:36:44 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:07.129 09:36:44 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82553 00:11:07.386 killing process with pid 82553 00:11:07.386 09:36:44 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:07.387 09:36:44 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:07.387 09:36:44 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82553' 00:11:07.387 09:36:44 nvme_rpc -- common/autotest_common.sh@969 -- # kill 82553 00:11:07.387 09:36:44 nvme_rpc -- common/autotest_common.sh@974 -- # wait 82553 00:11:07.645 ************************************ 00:11:07.645 END TEST nvme_rpc 00:11:07.645 ************************************ 00:11:07.645 00:11:07.645 real 0m2.183s 00:11:07.645 user 0m3.862s 00:11:07.645 sys 0m0.692s 00:11:07.645 09:36:45 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.645 09:36:45 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.645 09:36:45 -- spdk/autotest.sh@245 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:07.645 09:36:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:11:07.645 09:36:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.645 09:36:45 -- common/autotest_common.sh@10 -- # set +x 00:11:07.645 ************************************ 00:11:07.645 START TEST nvme_rpc_timeouts 00:11:07.645 ************************************ 00:11:07.645 09:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:07.904 * Looking for test storage... 00:11:07.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:07.904 09:36:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:07.904 09:36:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_82607 00:11:07.904 09:36:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_82607 00:11:07.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.904 09:36:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=82631 00:11:07.904 09:36:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:07.904 09:36:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:07.904 09:36:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 82631 00:11:07.904 09:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 82631 ']' 00:11:07.904 09:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.904 09:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:07.904 09:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.904 09:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:07.904 09:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:07.904 [2024-07-24 09:36:45.615099] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
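
Note: stepping back to the nvme_rpc test that finished above: after attaching Nvme0 over PCIe it deliberately calls bdev_nvme_apply_firmware with a file that does not exist and expects the JSON-RPC error recorded in the log (code -32603, "open file failed."); the point is the error path, not a real firmware update. A hedged sketch of that check follows; the rpc.py methods and arguments are the ones in the trace, the surrounding assertion is illustrative.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
if $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
  echo "expected bdev_nvme_apply_firmware to fail on a missing file" >&2
  exit 1                                             # success here would be a bug
fi
$rpc bdev_nvme_detach_controller Nvme0
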
00:11:07.904 [2024-07-24 09:36:45.615258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82631 ] 00:11:08.164 [2024-07-24 09:36:45.783481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:08.164 [2024-07-24 09:36:45.829356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.164 [2024-07-24 09:36:45.829447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.732 Checking default timeout settings: 00:11:08.732 09:36:46 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:08.732 09:36:46 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:11:08.732 09:36:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:08.732 09:36:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:08.991 Making settings changes with rpc: 00:11:08.991 09:36:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:08.991 09:36:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:09.250 Check default vs. modified settings: 00:11:09.250 09:36:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:09.250 09:36:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:09.508 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_82607 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_82607 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:09.509 Setting action_on_timeout is changed as expected. 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_82607 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_82607 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:09.509 Setting timeout_us is changed as expected. 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_82607 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_82607 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:09.509 Setting timeout_admin_us is changed as expected. 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
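
Note: the nvme_rpc_timeouts run above boils down to: snapshot the target configuration, push new NVMe timeout options over JSON-RPC, snapshot again, then diff the three timeout fields between the two dumps with the grep/awk/sed chain seen in the trace. A condensed sketch of that flow using the file names and values from this run; it illustrates the traced commands rather than copying nvme_rpc_timeouts.sh.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default_82607       # defaults in this run: none / 0 / 0
$rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified_82607      # now: abort / 12000000 / 24000000
for setting in action_on_timeout timeout_us timeout_admin_us; do
  before=$(grep "$setting" /tmp/settings_default_82607 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  after=$(grep "$setting" /tmp/settings_modified_82607 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  [[ $before != "$after" ]] || { echo "Setting $setting was not changed" >&2; exit 1; }
  echo "Setting $setting is changed as expected."
done
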
00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_82607 /tmp/settings_modified_82607 00:11:09.509 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 82631 00:11:09.509 09:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 82631 ']' 00:11:09.509 09:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 82631 00:11:09.509 09:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:11:09.768 09:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:09.768 09:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82631 00:11:09.768 killing process with pid 82631 00:11:09.768 09:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:09.768 09:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:09.768 09:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82631' 00:11:09.768 09:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 82631 00:11:09.768 09:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 82631 00:11:10.028 RPC TIMEOUT SETTING TEST PASSED. 00:11:10.028 09:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:11:10.028 ************************************ 00:11:10.028 END TEST nvme_rpc_timeouts 00:11:10.028 ************************************ 00:11:10.028 00:11:10.028 real 0m2.349s 00:11:10.028 user 0m4.453s 00:11:10.028 sys 0m0.713s 00:11:10.028 09:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.028 09:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:10.028 09:36:47 -- spdk/autotest.sh@247 -- # uname -s 00:11:10.028 09:36:47 -- spdk/autotest.sh@247 -- # '[' Linux = Linux ']' 00:11:10.029 09:36:47 -- spdk/autotest.sh@248 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:10.029 09:36:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:10.029 09:36:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:10.029 09:36:47 -- common/autotest_common.sh@10 -- # set +x 00:11:10.029 ************************************ 00:11:10.029 START TEST sw_hotplug 00:11:10.029 ************************************ 00:11:10.029 09:36:47 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:10.288 * Looking for test storage... 
00:11:10.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:10.288 09:36:47 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:10.856 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:11.115 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:11.115 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:11.115 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:11.115 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:11.115 09:36:48 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:11:11.115 09:36:48 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:11:11.115 09:36:48 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:11:11.115 09:36:48 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@230 -- # local class 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:11:11.115 09:36:48 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:11.115 09:36:48 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:11:11.116 09:36:48 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:11.116 09:36:48 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:11:11.116 09:36:48 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:11.116 09:36:48 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:11.684 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:11.943 Waiting for block devices as requested 00:11:11.943 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:12.202 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:12.202 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:12.202 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:17.502 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:17.502 09:36:55 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:11:17.502 09:36:55 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:18.070 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:11:18.070 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:18.070 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:11:18.329 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:18.588 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:18.588 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:18.847 09:36:56 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:18.847 09:36:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:18.847 09:36:56 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:18.847 09:36:56 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:18.847 09:36:56 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=83483 00:11:18.847 09:36:56 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:18.847 09:36:56 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:18.847 09:36:56 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:18.847 09:36:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:18.847 09:36:56 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:18.847 09:36:56 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:18.847 09:36:56 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:18.847 09:36:56 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:18.847 09:36:56 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:11:18.847 09:36:56 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:18.847 09:36:56 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:18.847 09:36:56 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:18.847 09:36:56 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:18.847 09:36:56 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:19.107 Initializing NVMe Controllers 00:11:19.107 Attaching to 0000:00:10.0 00:11:19.107 Attaching to 0000:00:11.0 00:11:19.107 Attached to 0000:00:10.0 00:11:19.107 Attached to 0000:00:11.0 00:11:19.107 Initialization complete. Starting I/O... 
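
Note: sw_hotplug picks its devices with nvme_in_userspace, traced above: scripts/common.sh walks PCI devices whose class/subclass is 01/08 (NVMe controllers) via lspci, keeps those still usable by the kernel nvme driver, and the test then trims the list to nvme_count=2, i.e. 0000:00:10.0 and 0000:00:11.0. A simplified sketch of that scan; the lspci/awk pipeline is copied from the trace, while the single sysfs check below stands in for the fuller pci_can_use / PCI_ALLOWED filtering.

nvme_in_userspace() {
  local bdf bdfs=()
  # class 01, subclass 08 == NVMe; -D prints full domain:bus:dev.func addresses
  for bdf in $(lspci -mm -n -D | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'); do
    [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && bdfs+=("$bdf")   # stand-in for the traced per-device checks
  done
  printf '%s\n' "${bdfs[@]}"
}
nvmes=($(nvme_in_userspace))
nvmes=("${nvmes[@]::2}")                              # nvme_count=2 in this run
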
00:11:19.107 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:19.107 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:19.107 00:11:20.043 QEMU NVMe Ctrl (12340 ): 1660 I/Os completed (+1660) 00:11:20.043 QEMU NVMe Ctrl (12341 ): 1660 I/Os completed (+1660) 00:11:20.043 00:11:21.420 QEMU NVMe Ctrl (12340 ): 3796 I/Os completed (+2136) 00:11:21.420 QEMU NVMe Ctrl (12341 ): 3796 I/Os completed (+2136) 00:11:21.420 00:11:22.354 QEMU NVMe Ctrl (12340 ): 6228 I/Os completed (+2432) 00:11:22.354 QEMU NVMe Ctrl (12341 ): 6228 I/Os completed (+2432) 00:11:22.354 00:11:23.288 QEMU NVMe Ctrl (12340 ): 8648 I/Os completed (+2420) 00:11:23.288 QEMU NVMe Ctrl (12341 ): 8648 I/Os completed (+2420) 00:11:23.288 00:11:24.221 QEMU NVMe Ctrl (12340 ): 11124 I/Os completed (+2476) 00:11:24.221 QEMU NVMe Ctrl (12341 ): 11124 I/Os completed (+2476) 00:11:24.221 00:11:25.156 09:37:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:25.156 09:37:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:25.156 09:37:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:25.156 [2024-07-24 09:37:02.637710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:25.156 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:25.156 [2024-07-24 09:37:02.639204] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.156 [2024-07-24 09:37:02.639253] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.157 [2024-07-24 09:37:02.639273] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.157 [2024-07-24 09:37:02.639291] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.157 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:25.157 [2024-07-24 09:37:02.641203] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.157 [2024-07-24 09:37:02.641238] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.157 [2024-07-24 09:37:02.641254] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.157 [2024-07-24 09:37:02.641273] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.157 09:37:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:25.157 09:37:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:25.157 [2024-07-24 09:37:02.679882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:25.157 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:25.157 [2024-07-24 09:37:02.681248] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.157 [2024-07-24 09:37:02.681286] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.157 [2024-07-24 09:37:02.681307] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.157 [2024-07-24 09:37:02.681322] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.157 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:25.157 [2024-07-24 09:37:02.682836] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.157 [2024-07-24 09:37:02.682865] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.157 [2024-07-24 09:37:02.682886] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.157 [2024-07-24 09:37:02.682901] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.157 09:37:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:25.157 09:37:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:25.157 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:25.157 EAL: Scan for (pci) bus failed. 00:11:25.157 09:37:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:25.157 09:37:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:25.157 09:37:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:25.157 00:11:25.157 09:37:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:25.157 09:37:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:25.157 09:37:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:25.157 09:37:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:25.157 09:37:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:25.157 Attaching to 0000:00:10.0 00:11:25.157 Attached to 0000:00:10.0 00:11:25.415 09:37:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:25.415 09:37:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:25.415 09:37:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:25.415 Attaching to 0000:00:11.0 00:11:25.415 Attached to 0000:00:11.0 00:11:26.350 QEMU NVMe Ctrl (12340 ): 2240 I/Os completed (+2240) 00:11:26.350 QEMU NVMe Ctrl (12341 ): 1976 I/Os completed (+1976) 00:11:26.350 00:11:27.286 QEMU NVMe Ctrl (12340 ): 4620 I/Os completed (+2380) 00:11:27.286 QEMU NVMe Ctrl (12341 ): 4356 I/Os completed (+2380) 00:11:27.286 00:11:28.222 QEMU NVMe Ctrl (12340 ): 7032 I/Os completed (+2412) 00:11:28.222 QEMU NVMe Ctrl (12341 ): 6768 I/Os completed (+2412) 00:11:28.222 00:11:29.159 QEMU NVMe Ctrl (12340 ): 9448 I/Os completed (+2416) 00:11:29.159 QEMU NVMe Ctrl (12341 ): 9186 I/Os completed (+2418) 00:11:29.159 00:11:30.140 QEMU NVMe Ctrl (12340 ): 11816 I/Os completed (+2368) 00:11:30.140 QEMU NVMe Ctrl (12341 ): 11554 I/Os completed (+2368) 00:11:30.140 00:11:31.094 QEMU NVMe Ctrl (12340 ): 14252 I/Os completed (+2436) 00:11:31.094 QEMU NVMe Ctrl (12341 ): 13990 I/Os completed (+2436) 00:11:31.094 00:11:32.030 QEMU NVMe Ctrl (12340 ): 16724 I/Os completed (+2472) 00:11:32.030 QEMU NVMe Ctrl (12341 ): 16462 I/Os completed (+2472) 
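Editor's sketch: the remove/attach cycle traced above (sw_hotplug.sh@38-43 and @56-62) drives the Linux PCI hotplug interface: each selected controller is surprise-removed, the bus is rescanned, and the devices are rebound to uio_pci_generic for the next iteration. The xtrace only shows the echoed values, not the sysfs destinations, so the paths below are assumptions based on the standard /sys/bus/pci interface rather than the script's literal contents (the trace also echoes each BDF twice at @60-61, which this simplified rebind step collapses into one probe).

# Hedged reconstruction of one hot-remove/re-attach cycle (sw_hotplug.sh@38-62).
# Sysfs paths are assumed; only the echoed values ("1", "uio_pci_generic", the BDFs)
# appear in the trace above.
while ((hotplug_events--)); do
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"      # @40: surprise-remove the controller
    done

    echo 1 > /sys/bus/pci/rescan                         # @56: rescan the bus so the devices reappear

    for dev in "${nvmes[@]}"; do                         # @58-62: rebind to the userspace driver
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"
    done
done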
00:11:32.030 00:11:33.407 QEMU NVMe Ctrl (12340 ): 19192 I/Os completed (+2468) 00:11:33.407 QEMU NVMe Ctrl (12341 ): 18930 I/Os completed (+2468) 00:11:33.407 00:11:34.344 QEMU NVMe Ctrl (12340 ): 21672 I/Os completed (+2480) 00:11:34.345 QEMU NVMe Ctrl (12341 ): 21410 I/Os completed (+2480) 00:11:34.345 00:11:35.282 QEMU NVMe Ctrl (12340 ): 24160 I/Os completed (+2488) 00:11:35.282 QEMU NVMe Ctrl (12341 ): 23898 I/Os completed (+2488) 00:11:35.282 00:11:36.217 QEMU NVMe Ctrl (12340 ): 26608 I/Os completed (+2448) 00:11:36.217 QEMU NVMe Ctrl (12341 ): 26348 I/Os completed (+2450) 00:11:36.217 00:11:37.153 QEMU NVMe Ctrl (12340 ): 29012 I/Os completed (+2404) 00:11:37.153 QEMU NVMe Ctrl (12341 ): 28752 I/Os completed (+2404) 00:11:37.153 00:11:37.413 09:37:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:37.413 09:37:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:37.413 09:37:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:37.413 09:37:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:37.413 [2024-07-24 09:37:15.020877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:37.413 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:37.413 [2024-07-24 09:37:15.022456] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.413 [2024-07-24 09:37:15.022613] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.413 [2024-07-24 09:37:15.022662] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.413 [2024-07-24 09:37:15.022754] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.413 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:37.413 [2024-07-24 09:37:15.024675] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.413 [2024-07-24 09:37:15.024791] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.413 [2024-07-24 09:37:15.024838] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.413 [2024-07-24 09:37:15.024922] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.413 09:37:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:37.413 09:37:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:37.413 [2024-07-24 09:37:15.059999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:37.413 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:37.413 [2024-07-24 09:37:15.061541] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.413 [2024-07-24 09:37:15.061620] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.413 [2024-07-24 09:37:15.061665] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.413 [2024-07-24 09:37:15.061748] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.413 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:37.413 [2024-07-24 09:37:15.063363] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.413 [2024-07-24 09:37:15.063464] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.413 [2024-07-24 09:37:15.063514] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.413 [2024-07-24 09:37:15.063598] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.413 09:37:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:37.413 09:37:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:37.413 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:37.413 EAL: Scan for (pci) bus failed. 00:11:37.413 09:37:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:37.413 09:37:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:37.413 09:37:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:37.673 09:37:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:37.673 09:37:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:37.673 09:37:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:37.673 09:37:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:37.673 09:37:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:37.673 Attaching to 0000:00:10.0 00:11:37.673 Attached to 0000:00:10.0 00:11:37.673 09:37:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:37.673 09:37:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:37.673 09:37:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:37.673 Attaching to 0000:00:11.0 00:11:37.673 Attached to 0000:00:11.0 00:11:38.241 QEMU NVMe Ctrl (12340 ): 1296 I/Os completed (+1296) 00:11:38.241 QEMU NVMe Ctrl (12341 ): 1028 I/Os completed (+1028) 00:11:38.241 00:11:39.221 QEMU NVMe Ctrl (12340 ): 3728 I/Os completed (+2432) 00:11:39.221 QEMU NVMe Ctrl (12341 ): 3460 I/Os completed (+2432) 00:11:39.221 00:11:40.156 QEMU NVMe Ctrl (12340 ): 6172 I/Os completed (+2444) 00:11:40.156 QEMU NVMe Ctrl (12341 ): 5904 I/Os completed (+2444) 00:11:40.156 00:11:41.091 QEMU NVMe Ctrl (12340 ): 8620 I/Os completed (+2448) 00:11:41.091 QEMU NVMe Ctrl (12341 ): 8352 I/Os completed (+2448) 00:11:41.091 00:11:42.023 QEMU NVMe Ctrl (12340 ): 11064 I/Os completed (+2444) 00:11:42.023 QEMU NVMe Ctrl (12341 ): 10796 I/Os completed (+2444) 00:11:42.023 00:11:43.402 QEMU NVMe Ctrl (12340 ): 13520 I/Os completed (+2456) 00:11:43.402 QEMU NVMe Ctrl (12341 ): 13252 I/Os completed (+2456) 00:11:43.402 00:11:44.338 QEMU NVMe Ctrl (12340 ): 15963 I/Os completed (+2443) 00:11:44.338 QEMU NVMe Ctrl (12341 ): 15694 I/Os completed (+2442) 00:11:44.338 
00:11:45.273 QEMU NVMe Ctrl (12340 ): 18415 I/Os completed (+2452) 00:11:45.273 QEMU NVMe Ctrl (12341 ): 18150 I/Os completed (+2456) 00:11:45.273 00:11:46.207 QEMU NVMe Ctrl (12340 ): 20895 I/Os completed (+2480) 00:11:46.207 QEMU NVMe Ctrl (12341 ): 20632 I/Os completed (+2482) 00:11:46.207 00:11:47.140 QEMU NVMe Ctrl (12340 ): 23331 I/Os completed (+2436) 00:11:47.140 QEMU NVMe Ctrl (12341 ): 23068 I/Os completed (+2436) 00:11:47.140 00:11:48.077 QEMU NVMe Ctrl (12340 ): 25803 I/Os completed (+2472) 00:11:48.077 QEMU NVMe Ctrl (12341 ): 25542 I/Os completed (+2474) 00:11:48.077 00:11:49.012 QEMU NVMe Ctrl (12340 ): 28251 I/Os completed (+2448) 00:11:49.012 QEMU NVMe Ctrl (12341 ): 27990 I/Os completed (+2448) 00:11:49.012 00:11:49.578 09:37:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:49.837 09:37:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:49.837 09:37:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.837 09:37:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.837 [2024-07-24 09:37:27.400566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:49.837 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:49.837 [2024-07-24 09:37:27.402472] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.837 [2024-07-24 09:37:27.402561] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.837 [2024-07-24 09:37:27.402604] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.837 [2024-07-24 09:37:27.402629] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.837 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:49.837 [2024-07-24 09:37:27.404584] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.837 [2024-07-24 09:37:27.404627] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.837 [2024-07-24 09:37:27.404645] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.837 [2024-07-24 09:37:27.404662] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.837 09:37:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.837 09:37:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.837 [2024-07-24 09:37:27.439332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:49.837 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:49.837 [2024-07-24 09:37:27.440681] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.837 [2024-07-24 09:37:27.440722] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.837 [2024-07-24 09:37:27.440741] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.837 [2024-07-24 09:37:27.440757] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.837 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:49.837 [2024-07-24 09:37:27.442268] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.837 [2024-07-24 09:37:27.442305] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.837 [2024-07-24 09:37:27.442323] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.837 [2024-07-24 09:37:27.442339] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.837 09:37:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:49.837 09:37:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:49.837 09:37:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.837 09:37:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.837 09:37:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:49.837 09:37:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:50.096 09:37:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:50.096 09:37:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:50.096 09:37:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:50.096 09:37:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:50.096 Attaching to 0000:00:10.0 00:11:50.096 Attached to 0000:00:10.0 00:11:50.096 09:37:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:50.096 09:37:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:50.096 09:37:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:50.096 Attaching to 0000:00:11.0 00:11:50.096 Attached to 0000:00:11.0 00:11:50.096 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:50.096 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:50.096 [2024-07-24 09:37:27.779407] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:02.347 09:37:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:02.347 09:37:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:02.347 09:37:39 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.14 00:12:02.347 09:37:39 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.14 00:12:02.347 09:37:39 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:02.347 09:37:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.14 00:12:02.347 09:37:39 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.14 2 00:12:02.347 remove_attach_helper took 43.14s to complete (handling 2 nvme drive(s)) 09:37:39 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:12:08.909 09:37:45 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 83483 00:12:08.909 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (83483) - No such process 00:12:08.909 09:37:45 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 83483 00:12:08.909 09:37:45 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:12:08.909 09:37:45 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:12:08.909 09:37:45 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:12:08.910 09:37:45 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=84033 00:12:08.910 09:37:45 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:08.910 09:37:45 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:08.910 09:37:45 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 84033 00:12:08.910 09:37:45 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 84033 ']' 00:12:08.910 09:37:45 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.910 09:37:45 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:08.910 09:37:45 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.910 09:37:45 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:08.910 09:37:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.910 [2024-07-24 09:37:45.891683] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:12:08.910 [2024-07-24 09:37:45.891941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84033 ] 00:12:08.910 [2024-07-24 09:37:46.058939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.910 [2024-07-24 09:37:46.109294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.910 09:37:46 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:08.910 09:37:46 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:12:08.910 09:37:46 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:08.910 09:37:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.910 09:37:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.910 09:37:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.910 09:37:46 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:12:08.910 09:37:46 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:08.910 09:37:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:08.910 09:37:46 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:08.910 09:37:46 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:08.910 09:37:46 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:08.910 09:37:46 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:08.910 09:37:46 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:08.910 09:37:46 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:08.910 09:37:46 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:08.910 09:37:46 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:08.910 09:37:46 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:08.910 09:37:46 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:15.479 09:37:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:15.479 09:37:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:15.479 09:37:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:15.479 09:37:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:15.479 09:37:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:15.479 09:37:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:15.479 09:37:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:15.479 09:37:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:15.479 09:37:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:15.480 09:37:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:15.480 09:37:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:15.480 09:37:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.480 09:37:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:15.480 [2024-07-24 09:37:52.771279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:15.480 [2024-07-24 09:37:52.773432] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.480 [2024-07-24 09:37:52.773505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.480 [2024-07-24 09:37:52.773532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.480 [2024-07-24 09:37:52.773557] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.480 [2024-07-24 09:37:52.773572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.480 [2024-07-24 09:37:52.773592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.480 [2024-07-24 09:37:52.773607] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.480 [2024-07-24 09:37:52.773626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.480 [2024-07-24 09:37:52.773640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.480 [2024-07-24 09:37:52.773658] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.480 [2024-07-24 09:37:52.773671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.480 [2024-07-24 09:37:52.773688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.480 09:37:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.480 09:37:52 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:15.480 09:37:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:15.480 [2024-07-24 09:37:53.170637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:15.480 [2024-07-24 09:37:53.172819] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.480 [2024-07-24 09:37:53.172860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.480 [2024-07-24 09:37:53.172880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.480 [2024-07-24 09:37:53.172899] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.480 [2024-07-24 09:37:53.172912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.480 [2024-07-24 09:37:53.172924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.480 [2024-07-24 09:37:53.172940] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.480 [2024-07-24 09:37:53.172950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.480 [2024-07-24 09:37:53.172964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.480 [2024-07-24 09:37:53.172976] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.480 [2024-07-24 09:37:53.172993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.480 [2024-07-24 09:37:53.173005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.749 09:37:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:15.749 09:37:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:15.749 09:37:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:15.749 09:37:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:15.749 09:37:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:15.749 09:37:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:15.749 09:37:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.749 09:37:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:15.749 09:37:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.749 09:37:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:15.749 09:37:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:15.749 09:37:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:15.749 09:37:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:15.749 09:37:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:16.010 09:37:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:16.010 09:37:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:16.010 
09:37:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:16.010 09:37:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:16.010 09:37:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:16.010 09:37:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:16.010 09:37:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:16.010 09:37:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:28.271 09:38:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:28.271 09:38:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:28.271 09:38:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:28.271 09:38:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:28.271 09:38:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:28.271 09:38:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:28.271 09:38:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.271 09:38:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:28.271 09:38:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.271 09:38:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:28.271 09:38:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:28.271 09:38:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:28.271 09:38:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:28.271 09:38:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:28.271 09:38:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:28.271 [2024-07-24 09:38:05.850375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
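Editor's sketch: in the target-mode half of the test (tgt_run_hotplug, use_bdev=true) the helper does not watch the hotplug example app. Hotplug monitoring is enabled in spdk_tgt with rpc_cmd bdev_nvme_set_hotplug -e (sw_hotplug.sh@115), and after each removal the helper polls bdev_get_bdevs until the removed BDFs stop showing up, as traced at sw_hotplug.sh@12-13 and @50-51 above. The sketch below reconstructs that polling from the trace; the exact loop shape in the script may differ slightly, and rpc_cmd is the autotest wrapper around scripts/rpc.py.

# bdev_bdfs (sw_hotplug.sh@12-13): PCI addresses still backing NVMe bdevs in the target.
bdev_bdfs() {
    rpc_cmd bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}

# Polling loop as traced at sw_hotplug.sh@50-51 (approximate shape):
bdfs=($(bdev_bdfs))
while ((${#bdfs[@]} > 0)); do
    sleep 0.5
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    bdfs=($(bdev_bdfs))
done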
00:12:28.271 [2024-07-24 09:38:05.852704] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.271 [2024-07-24 09:38:05.852750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.271 [2024-07-24 09:38:05.852767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.271 [2024-07-24 09:38:05.852788] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.271 [2024-07-24 09:38:05.852800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.271 [2024-07-24 09:38:05.852814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.271 [2024-07-24 09:38:05.852826] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.271 [2024-07-24 09:38:05.852839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.271 [2024-07-24 09:38:05.852850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.271 [2024-07-24 09:38:05.852864] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.271 [2024-07-24 09:38:05.852875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.271 [2024-07-24 09:38:05.852889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.271 09:38:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:28.271 09:38:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:28.271 09:38:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:28.271 09:38:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:28.271 09:38:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:28.271 09:38:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:28.271 09:38:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.271 09:38:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:28.271 09:38:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.271 09:38:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:28.271 09:38:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:28.837 [2024-07-24 09:38:06.349568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:28.837 [2024-07-24 09:38:06.351749] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.837 [2024-07-24 09:38:06.351792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.837 [2024-07-24 09:38:06.351811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.837 [2024-07-24 09:38:06.351830] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.837 [2024-07-24 09:38:06.351844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.837 [2024-07-24 09:38:06.351856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.837 [2024-07-24 09:38:06.351870] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.837 [2024-07-24 09:38:06.351881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.837 [2024-07-24 09:38:06.351894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.837 [2024-07-24 09:38:06.351906] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.837 [2024-07-24 09:38:06.351919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.837 [2024-07-24 09:38:06.351931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.837 09:38:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:28.837 09:38:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:28.837 09:38:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:28.837 09:38:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:28.837 09:38:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:28.837 09:38:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:28.837 09:38:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.837 09:38:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:28.837 09:38:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.837 09:38:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:28.837 09:38:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:28.837 09:38:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:28.837 09:38:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:28.837 09:38:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:29.094 09:38:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:29.094 09:38:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:29.094 09:38:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:29.094 09:38:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:29.094 09:38:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:29.094 09:38:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:29.094 09:38:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:29.094 09:38:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:41.304 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:41.304 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:41.304 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:41.304 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:41.304 09:38:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.304 09:38:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:41.304 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:41.304 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:41.304 09:38:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.304 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:41.304 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:41.304 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:41.304 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:41.304 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:41.304 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:41.304 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:41.304 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:41.304 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:41.304 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:41.304 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:41.304 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:41.304 09:38:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.304 09:38:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:41.304 [2024-07-24 09:38:18.929384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:41.304 [2024-07-24 09:38:18.931551] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.304 [2024-07-24 09:38:18.931596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.304 [2024-07-24 09:38:18.931613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.304 [2024-07-24 09:38:18.931637] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.304 [2024-07-24 09:38:18.931652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.304 [2024-07-24 09:38:18.931666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.304 [2024-07-24 09:38:18.931678] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.304 [2024-07-24 09:38:18.931691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.305 [2024-07-24 09:38:18.931703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.305 [2024-07-24 09:38:18.931717] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.305 [2024-07-24 09:38:18.931727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.305 [2024-07-24 09:38:18.931741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.305 09:38:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.305 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:41.305 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:41.563 [2024-07-24 09:38:19.328754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:41.563 [2024-07-24 09:38:19.331007] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.563 [2024-07-24 09:38:19.331050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.563 [2024-07-24 09:38:19.331069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.563 [2024-07-24 09:38:19.331088] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.563 [2024-07-24 09:38:19.331102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.563 [2024-07-24 09:38:19.331114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.563 [2024-07-24 09:38:19.331132] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.563 [2024-07-24 09:38:19.331142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.563 [2024-07-24 09:38:19.331157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.563 [2024-07-24 09:38:19.331169] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.563 [2024-07-24 09:38:19.331182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.563 [2024-07-24 09:38:19.331216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.822 09:38:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:41.822 09:38:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:41.822 09:38:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:41.822 09:38:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:41.822 09:38:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:41.822 09:38:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:41.822 09:38:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.822 09:38:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:41.822 09:38:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.822 09:38:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:41.822 09:38:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:41.822 09:38:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:41.822 09:38:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:41.822 09:38:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:42.080 09:38:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:42.080 09:38:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:42.080 09:38:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:42.080 09:38:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:42.080 09:38:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:42.080 09:38:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:42.080 09:38:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:42.080 09:38:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:54.299 09:38:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:54.299 09:38:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:54.299 09:38:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:54.299 09:38:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:54.299 09:38:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:54.299 09:38:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:54.299 09:38:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.299 09:38:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:54.299 09:38:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.299 09:38:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:54.299 09:38:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:54.299 09:38:31 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.24 00:12:54.299 09:38:31 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.24 00:12:54.299 09:38:31 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:54.299 09:38:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.24 00:12:54.299 09:38:31 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.24 2 00:12:54.299 remove_attach_helper took 45.24s to complete (handling 2 nvme drive(s)) 09:38:31 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:54.299 09:38:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.299 09:38:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:54.299 09:38:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.299 09:38:31 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:54.299 09:38:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.299 09:38:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:54.299 09:38:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.299 09:38:31 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:54.299 09:38:31 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:54.299 09:38:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:54.299 09:38:31 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:54.299 09:38:31 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:54.299 09:38:31 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:54.299 09:38:31 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:54.299 09:38:31 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:54.299 09:38:31 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:54.299 09:38:31 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:54.299 09:38:31 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:54.299 09:38:31 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:54.299 09:38:31 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:00.860 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:00.860 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:00.860 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:00.860 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:00.860 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:00.860 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:00.860 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:00.860 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:00.860 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:00.860 09:38:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.860 09:38:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:00.860 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:00.860 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:00.860 [2024-07-24 09:38:38.045584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:00.860 [2024-07-24 09:38:38.048127] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:00.861 [2024-07-24 09:38:38.048215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:00.861 [2024-07-24 09:38:38.048250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.861 [2024-07-24 09:38:38.048300] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:00.861 [2024-07-24 09:38:38.048325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:00.861 [2024-07-24 09:38:38.048346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.861 [2024-07-24 09:38:38.048369] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:00.861 [2024-07-24 09:38:38.048397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:00.861 [2024-07-24 09:38:38.048420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.861 [2024-07-24 09:38:38.048452] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:00.861 [2024-07-24 09:38:38.048474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:00.861 [2024-07-24 09:38:38.048502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.861 [2024-07-24 09:38:38.048532] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:13:00.861 [2024-07-24 09:38:38.048561] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:13:00.861 [2024-07-24 09:38:38.048577] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:13:00.861 [2024-07-24 09:38:38.048600] bdev_nvme.c:5228:aer_cb: 
*WARNING*: AER request execute failed 00:13:00.861 09:38:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.861 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:00.861 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:00.861 [2024-07-24 09:38:38.444899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:13:00.861 [2024-07-24 09:38:38.446533] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:00.861 [2024-07-24 09:38:38.446574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:00.861 [2024-07-24 09:38:38.446594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.861 [2024-07-24 09:38:38.446612] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:00.861 [2024-07-24 09:38:38.446627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:00.861 [2024-07-24 09:38:38.446639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.861 [2024-07-24 09:38:38.446653] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:00.861 [2024-07-24 09:38:38.446664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:00.861 [2024-07-24 09:38:38.446682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.861 [2024-07-24 09:38:38.446695] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:00.861 [2024-07-24 09:38:38.446708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:00.861 [2024-07-24 09:38:38.446720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.861 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:00.861 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:00.861 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:00.861 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:00.861 09:38:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.861 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:00.861 09:38:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:00.861 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:00.861 09:38:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.861 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:00.861 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:01.119 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:01.119 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:01.119 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 
00:13:01.119 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:01.119 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:01.119 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:01.119 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:01.119 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:01.119 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:01.376 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:01.376 09:38:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:13.596 09:38:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:13.596 09:38:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:13.596 09:38:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:13.596 09:38:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:13.596 09:38:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:13.596 09:38:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:13.596 09:38:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.596 09:38:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:13.596 09:38:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.596 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:13.596 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:13.596 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:13.596 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:13.596 [2024-07-24 09:38:51.024806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:13:13.596 [2024-07-24 09:38:51.026791] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.596 [2024-07-24 09:38:51.026842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.596 [2024-07-24 09:38:51.026869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.596 [2024-07-24 09:38:51.026898] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.596 [2024-07-24 09:38:51.026913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.596 [2024-07-24 09:38:51.026933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.596 [2024-07-24 09:38:51.026948] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.596 [2024-07-24 09:38:51.026965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.597 [2024-07-24 09:38:51.026980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.597 [2024-07-24 09:38:51.026998] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.597 [2024-07-24 09:38:51.027012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.597 [2024-07-24 09:38:51.027033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.597 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:13.597 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:13.597 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:13.597 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:13.597 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:13.597 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:13.597 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:13.597 09:38:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.597 09:38:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:13.597 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:13.597 09:38:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.597 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:13.597 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:13.855 [2024-07-24 09:38:51.424154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:13.855 [2024-07-24 09:38:51.425765] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.855 [2024-07-24 09:38:51.425809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.855 [2024-07-24 09:38:51.425828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.855 [2024-07-24 09:38:51.425848] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.855 [2024-07-24 09:38:51.425863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.855 [2024-07-24 09:38:51.425875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.855 [2024-07-24 09:38:51.425889] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.855 [2024-07-24 09:38:51.425901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.855 [2024-07-24 09:38:51.425915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.855 [2024-07-24 09:38:51.425927] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.855 [2024-07-24 09:38:51.425940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.855 [2024-07-24 09:38:51.425952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.855 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:13.855 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:13.855 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:13.855 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:13.855 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:13.855 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:13.855 09:38:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.855 09:38:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:13.855 09:38:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.855 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:13.855 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:14.138 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:14.138 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:14.138 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:14.138 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:14.138 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:14.138 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:14.138 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:14.138 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:13:14.398 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:14.398 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:14.398 09:38:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:26.602 09:39:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:26.602 09:39:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:26.602 09:39:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:26.602 09:39:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:26.602 09:39:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.602 09:39:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:26.602 09:39:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:26.602 09:39:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:26.602 09:39:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.602 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:26.602 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:26.602 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:26.602 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:26.602 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:26.602 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:26.602 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:26.602 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:26.602 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:26.602 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:26.602 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:26.602 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:26.602 09:39:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.602 09:39:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:26.602 [2024-07-24 09:39:04.103736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:13:26.602 [2024-07-24 09:39:04.105167] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.602 [2024-07-24 09:39:04.105224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.602 [2024-07-24 09:39:04.105242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.602 [2024-07-24 09:39:04.105263] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.602 [2024-07-24 09:39:04.105274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.602 [2024-07-24 09:39:04.105289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.602 [2024-07-24 09:39:04.105301] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.602 [2024-07-24 09:39:04.105315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.602 [2024-07-24 09:39:04.105326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.602 [2024-07-24 09:39:04.105340] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.602 [2024-07-24 09:39:04.105351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.602 [2024-07-24 09:39:04.105364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.602 09:39:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.602 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:26.602 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:26.860 [2024-07-24 09:39:04.503090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
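The abort dump that follows for 0000:00:11.0, like the one just above for 0000:00:10.0, is the expected side effect of a surprise removal: each controller keeps four Asynchronous Event Requests (cid 187-190) outstanding on its admin queue, so when the device vanishes the driver marks the controller failed and nvme_pcie_qpair_abort_trackers completes those commands as ABORTED - BY REQUEST. The trace only shows bare echo statements for the detach/re-attach itself, so the sysfs paths in the sketch below are an assumption based on the standard Linux PCI hotplug interface, not something printed in this log:

# Assumed shape of one sw_hotplug iteration; the sysfs targets are inferred, only the echoed values appear in the trace
bdf=0000:00:10.0
echo 1 > "/sys/bus/pci/devices/$bdf/remove"               # surprise-remove: controller fails, outstanding AERs are aborted
#   ...poll bdev_get_bdevs until the address is gone (see the bdev_bdfs sketch above)...
echo 1 > /sys/bus/pci/rescan                              # bring the function back
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe                  # rebind it to uio_pci_generic for SPDK
echo '' > "/sys/bus/pci/devices/$bdf/driver_override"     # clear the override again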
00:13:26.860 [2024-07-24 09:39:04.504507] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.860 [2024-07-24 09:39:04.504546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.860 [2024-07-24 09:39:04.504565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.861 [2024-07-24 09:39:04.504581] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.861 [2024-07-24 09:39:04.504599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.861 [2024-07-24 09:39:04.504611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.861 [2024-07-24 09:39:04.504625] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.861 [2024-07-24 09:39:04.504636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.861 [2024-07-24 09:39:04.504650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.861 [2024-07-24 09:39:04.504662] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.861 [2024-07-24 09:39:04.504675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.861 [2024-07-24 09:39:04.504686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.861 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:26.861 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:26.861 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:26.861 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:26.861 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:26.861 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:26.861 09:39:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.861 09:39:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:26.861 09:39:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.119 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:27.119 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:27.119 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:27.120 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:27.120 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:27.120 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:27.120 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:27.120 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:27.120 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:27.120 09:39:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:27.378 09:39:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:27.378 09:39:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:27.379 09:39:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:39.637 09:39:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:39.637 09:39:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:39.637 09:39:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:39.637 09:39:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:39.637 09:39:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.637 09:39:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:39.637 09:39:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:39.637 09:39:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:39.637 09:39:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.637 09:39:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:39.637 09:39:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:39.637 09:39:17 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.13 00:13:39.637 09:39:17 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.13 00:13:39.637 09:39:17 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:13:39.637 09:39:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.13 00:13:39.637 09:39:17 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.13 2 00:13:39.637 remove_attach_helper took 45.13s to complete (handling 2 nvme drive(s)) 09:39:17 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:39.637 09:39:17 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 84033 00:13:39.637 09:39:17 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 84033 ']' 00:13:39.637 09:39:17 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 84033 00:13:39.637 09:39:17 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:13:39.637 09:39:17 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:39.637 09:39:17 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84033 00:13:39.637 09:39:17 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:39.637 09:39:17 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:39.637 killing process with pid 84033 00:13:39.637 09:39:17 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84033' 00:13:39.637 09:39:17 sw_hotplug -- common/autotest_common.sh@969 -- # kill 84033 00:13:39.637 09:39:17 sw_hotplug -- common/autotest_common.sh@974 -- # wait 84033 00:13:39.896 09:39:17 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:40.464 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:41.031 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:41.031 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:41.031 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:41.031 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:41.289 00:13:41.289 real 2m31.048s 00:13:41.289 user 1m47.671s 00:13:41.289 sys 0m23.396s 00:13:41.289 09:39:18 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:13:41.289 ************************************ 00:13:41.289 END TEST sw_hotplug 00:13:41.289 ************************************ 00:13:41.289 09:39:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:41.289 09:39:18 -- spdk/autotest.sh@251 -- # [[ 1 -eq 1 ]] 00:13:41.289 09:39:18 -- spdk/autotest.sh@252 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:41.289 09:39:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:41.289 09:39:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:41.289 09:39:18 -- common/autotest_common.sh@10 -- # set +x 00:13:41.289 ************************************ 00:13:41.289 START TEST nvme_xnvme 00:13:41.289 ************************************ 00:13:41.289 09:39:18 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:41.289 * Looking for test storage... 00:13:41.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:41.289 09:39:19 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:41.289 09:39:19 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.289 09:39:19 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.289 09:39:19 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.289 09:39:19 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.289 09:39:19 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.289 09:39:19 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.289 09:39:19 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:41.289 09:39:19 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.289 09:39:19 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:41.289 09:39:19 nvme_xnvme -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:13:41.289 09:39:19 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:41.289 09:39:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:41.289 ************************************ 00:13:41.289 START TEST xnvme_to_malloc_dd_copy 00:13:41.289 ************************************ 00:13:41.289 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:13:41.289 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:41.289 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:41.289 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:41.548 09:39:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:41.548 { 00:13:41.548 "subsystems": [ 00:13:41.548 { 00:13:41.548 "subsystem": "bdev", 00:13:41.548 "config": [ 00:13:41.548 { 00:13:41.548 "params": { 00:13:41.548 "block_size": 512, 00:13:41.548 "num_blocks": 2097152, 00:13:41.548 "name": "malloc0" 00:13:41.548 }, 00:13:41.548 "method": "bdev_malloc_create" 00:13:41.548 }, 00:13:41.548 { 00:13:41.548 "params": 
{ 00:13:41.548 "io_mechanism": "libaio", 00:13:41.548 "filename": "/dev/nullb0", 00:13:41.548 "name": "null0" 00:13:41.548 }, 00:13:41.548 "method": "bdev_xnvme_create" 00:13:41.548 }, 00:13:41.548 { 00:13:41.548 "method": "bdev_wait_for_examine" 00:13:41.548 } 00:13:41.548 ] 00:13:41.548 } 00:13:41.548 ] 00:13:41.548 } 00:13:41.548 [2024-07-24 09:39:19.198107] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:13:41.548 [2024-07-24 09:39:19.198320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85379 ] 00:13:41.548 [2024-07-24 09:39:19.363775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.808 [2024-07-24 09:39:19.405631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.594  Copying: 263/1024 [MB] (263 MBps) Copying: 526/1024 [MB] (263 MBps) Copying: 787/1024 [MB] (261 MBps) Copying: 1024/1024 [MB] (average 263 MBps) 00:13:46.594 00:13:46.594 09:39:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:46.594 09:39:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:46.594 09:39:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:46.594 09:39:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:46.594 { 00:13:46.594 "subsystems": [ 00:13:46.594 { 00:13:46.594 "subsystem": "bdev", 00:13:46.594 "config": [ 00:13:46.594 { 00:13:46.594 "params": { 00:13:46.594 "block_size": 512, 00:13:46.594 "num_blocks": 2097152, 00:13:46.594 "name": "malloc0" 00:13:46.594 }, 00:13:46.594 "method": "bdev_malloc_create" 00:13:46.594 }, 00:13:46.594 { 00:13:46.594 "params": { 00:13:46.594 "io_mechanism": "libaio", 00:13:46.594 "filename": "/dev/nullb0", 00:13:46.594 "name": "null0" 00:13:46.594 }, 00:13:46.594 "method": "bdev_xnvme_create" 00:13:46.594 }, 00:13:46.594 { 00:13:46.594 "method": "bdev_wait_for_examine" 00:13:46.594 } 00:13:46.594 ] 00:13:46.594 } 00:13:46.594 ] 00:13:46.594 } 00:13:46.594 [2024-07-24 09:39:24.255153] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:13:46.594 [2024-07-24 09:39:24.255274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85441 ] 00:13:46.852 [2024-07-24 09:39:24.422614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.852 [2024-07-24 09:39:24.462049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.599  Copying: 267/1024 [MB] (267 MBps) Copying: 530/1024 [MB] (263 MBps) Copying: 791/1024 [MB] (260 MBps) Copying: 1024/1024 [MB] (average 263 MBps) 00:13:51.599 00:13:51.599 09:39:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:51.599 09:39:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:51.599 09:39:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:51.599 09:39:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:51.599 09:39:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:51.599 09:39:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:51.599 { 00:13:51.599 "subsystems": [ 00:13:51.599 { 00:13:51.599 "subsystem": "bdev", 00:13:51.599 "config": [ 00:13:51.599 { 00:13:51.599 "params": { 00:13:51.599 "block_size": 512, 00:13:51.599 "num_blocks": 2097152, 00:13:51.599 "name": "malloc0" 00:13:51.599 }, 00:13:51.599 "method": "bdev_malloc_create" 00:13:51.599 }, 00:13:51.599 { 00:13:51.599 "params": { 00:13:51.599 "io_mechanism": "io_uring", 00:13:51.599 "filename": "/dev/nullb0", 00:13:51.599 "name": "null0" 00:13:51.599 }, 00:13:51.599 "method": "bdev_xnvme_create" 00:13:51.599 }, 00:13:51.599 { 00:13:51.600 "method": "bdev_wait_for_examine" 00:13:51.600 } 00:13:51.600 ] 00:13:51.600 } 00:13:51.600 ] 00:13:51.600 } 00:13:51.600 [2024-07-24 09:39:29.319397] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:13:51.600 [2024-07-24 09:39:29.319507] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85506 ] 00:13:51.858 [2024-07-24 09:39:29.485802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.859 [2024-07-24 09:39:29.525906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.620  Copying: 270/1024 [MB] (270 MBps) Copying: 538/1024 [MB] (268 MBps) Copying: 805/1024 [MB] (266 MBps) Copying: 1024/1024 [MB] (average 268 MBps) 00:13:56.620 00:13:56.620 09:39:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:56.620 09:39:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:56.620 09:39:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:56.620 09:39:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:56.620 { 00:13:56.620 "subsystems": [ 00:13:56.620 { 00:13:56.620 "subsystem": "bdev", 00:13:56.620 "config": [ 00:13:56.620 { 00:13:56.620 "params": { 00:13:56.620 "block_size": 512, 00:13:56.620 "num_blocks": 2097152, 00:13:56.620 "name": "malloc0" 00:13:56.620 }, 00:13:56.620 "method": "bdev_malloc_create" 00:13:56.620 }, 00:13:56.620 { 00:13:56.620 "params": { 00:13:56.620 "io_mechanism": "io_uring", 00:13:56.620 "filename": "/dev/nullb0", 00:13:56.620 "name": "null0" 00:13:56.620 }, 00:13:56.621 "method": "bdev_xnvme_create" 00:13:56.621 }, 00:13:56.621 { 00:13:56.621 "method": "bdev_wait_for_examine" 00:13:56.621 } 00:13:56.621 ] 00:13:56.621 } 00:13:56.621 ] 00:13:56.621 } 00:13:56.621 [2024-07-24 09:39:34.267284] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
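Each of these copy passes is spdk_dd shuttling 1 GiB between a malloc bdev (malloc0: 2097152 blocks of 512 bytes) and an xnvme bdev (null0, backed by /dev/nullb0), with the bdev configuration handed over on /dev/fd/62; only the copy direction and the io_mechanism (libaio, then io_uring) change between passes. A single pass can be reproduced outside the harness with an ordinary config file. Everything below is lifted from the JSON printed in this log except the file name xnvme_copy.json, which is invented for the example:

# Same bdev layout as the test's gen_conf output, written to a file instead of /dev/fd/62
cat > xnvme_copy.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "num_blocks": 2097152, "block_size": 512 } },
        { "method": "bdev_xnvme_create",
          "params": { "name": "null0", "filename": "/dev/nullb0", "io_mechanism": "libaio" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
modprobe null_blk gb=1                                               # provides /dev/nullb0, as init_null_blk does above
./build/bin/spdk_dd --ib=malloc0 --ob=null0 --json xnvme_copy.json   # malloc0 -> null0
./build/bin/spdk_dd --ib=null0 --ob=malloc0 --json xnvme_copy.json   # and back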
00:13:56.621 [2024-07-24 09:39:34.267469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85560 ] 00:13:56.878 [2024-07-24 09:39:34.440306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.878 [2024-07-24 09:39:34.486585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.319  Copying: 268/1024 [MB] (268 MBps) Copying: 540/1024 [MB] (271 MBps) Copying: 815/1024 [MB] (274 MBps) Copying: 1024/1024 [MB] (average 272 MBps) 00:14:01.319 00:14:01.319 09:39:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:14:01.319 09:39:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:01.319 00:14:01.319 real 0m20.049s 00:14:01.319 user 0m15.758s 00:14:01.319 sys 0m3.852s 00:14:01.319 09:39:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:01.319 09:39:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:01.319 ************************************ 00:14:01.319 END TEST xnvme_to_malloc_dd_copy 00:14:01.319 ************************************ 00:14:01.578 09:39:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:01.578 09:39:39 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:01.578 09:39:39 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:01.578 09:39:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:01.578 ************************************ 00:14:01.578 START TEST xnvme_bdevperf 00:14:01.578 ************************************ 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # 
method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:01.579 09:39:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:01.579 { 00:14:01.579 "subsystems": [ 00:14:01.579 { 00:14:01.579 "subsystem": "bdev", 00:14:01.579 "config": [ 00:14:01.579 { 00:14:01.579 "params": { 00:14:01.579 "io_mechanism": "libaio", 00:14:01.579 "filename": "/dev/nullb0", 00:14:01.579 "name": "null0" 00:14:01.579 }, 00:14:01.579 "method": "bdev_xnvme_create" 00:14:01.579 }, 00:14:01.579 { 00:14:01.579 "method": "bdev_wait_for_examine" 00:14:01.579 } 00:14:01.579 ] 00:14:01.579 } 00:14:01.579 ] 00:14:01.579 } 00:14:01.579 [2024-07-24 09:39:39.323369] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:14:01.579 [2024-07-24 09:39:39.323503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85649 ] 00:14:01.839 [2024-07-24 09:39:39.491269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.839 [2024-07-24 09:39:39.532498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.839 Running I/O for 5 seconds... 00:14:07.147 00:14:07.147 Latency(us) 00:14:07.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.147 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:07.147 null0 : 5.00 162457.82 634.60 0.00 0.00 391.57 122.55 648.12 00:14:07.147 =================================================================================================================== 00:14:07.148 Total : 162457.82 634.60 0.00 0.00 391.57 122.55 648.12 00:14:07.148 09:39:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:07.148 09:39:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:07.148 09:39:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:07.148 09:39:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:07.148 09:39:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:07.148 09:39:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:07.148 { 00:14:07.148 "subsystems": [ 00:14:07.148 { 00:14:07.148 "subsystem": "bdev", 00:14:07.148 "config": [ 00:14:07.148 { 00:14:07.148 "params": { 00:14:07.148 "io_mechanism": "io_uring", 00:14:07.148 "filename": "/dev/nullb0", 00:14:07.148 "name": "null0" 00:14:07.148 }, 00:14:07.148 "method": "bdev_xnvme_create" 00:14:07.148 }, 00:14:07.148 { 00:14:07.148 "method": "bdev_wait_for_examine" 00:14:07.148 } 00:14:07.148 ] 00:14:07.148 } 00:14:07.148 ] 00:14:07.148 } 00:14:07.148 [2024-07-24 09:39:44.959458] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
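bdevperf drives both of these runs the same way: a single xnvme bdev named null0 on /dev/nullb0, 4096-byte random reads at queue depth 64 for 5 seconds, first through libaio (the latency table above) and then through io_uring (the table that follows). A minimal stand-alone invocation, assuming the same null_blk setup and using an arbitrary config file name (null0.json), might look like:

# Reuses the exact bdevperf flags from the trace; only the config file name is invented
cat > null0.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
    { "method": "bdev_xnvme_create",
      "params": { "name": "null0", "filename": "/dev/nullb0", "io_mechanism": "libaio" } },
    { "method": "bdev_wait_for_examine" }
] } ] }
EOF
modprobe null_blk gb=1                                   # 1 GiB null block device for the bdev to sit on
./build/examples/bdevperf --json null0.json -q 64 -w randread -t 5 -T null0 -o 4096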
00:14:07.148 [2024-07-24 09:39:44.960076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85718 ] 00:14:07.406 [2024-07-24 09:39:45.124980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.406 [2024-07-24 09:39:45.168792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.666 Running I/O for 5 seconds... 00:14:12.936 00:14:12.936 Latency(us) 00:14:12.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.936 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:12.936 null0 : 5.00 207486.17 810.49 0.00 0.00 306.07 190.00 756.69 00:14:12.936 =================================================================================================================== 00:14:12.936 Total : 207486.17 810.49 0.00 0.00 306.07 190.00 756.69 00:14:12.936 09:39:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:12.936 09:39:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:12.936 00:14:12.936 real 0m11.299s 00:14:12.936 user 0m8.020s 00:14:12.936 sys 0m3.080s 00:14:12.936 09:39:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:12.936 09:39:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:12.936 ************************************ 00:14:12.936 END TEST xnvme_bdevperf 00:14:12.936 ************************************ 00:14:12.936 00:14:12.936 real 0m31.631s 00:14:12.936 user 0m23.879s 00:14:12.936 sys 0m7.115s 00:14:12.936 09:39:50 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:12.936 09:39:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:12.936 ************************************ 00:14:12.936 END TEST nvme_xnvme 00:14:12.936 ************************************ 00:14:12.936 09:39:50 -- spdk/autotest.sh@253 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:12.936 09:39:50 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:12.936 09:39:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:12.936 09:39:50 -- common/autotest_common.sh@10 -- # set +x 00:14:12.936 ************************************ 00:14:12.936 START TEST blockdev_xnvme 00:14:12.936 ************************************ 00:14:12.936 09:39:50 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:12.936 * Looking for test storage... 
00:14:12.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:12.936 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:12.936 09:39:50 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=85847 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:13.195 09:39:50 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 85847 00:14:13.195 09:39:50 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 85847 ']' 00:14:13.195 09:39:50 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.195 09:39:50 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:13.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.195 09:39:50 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.195 09:39:50 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:13.195 09:39:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:13.195 [2024-07-24 09:39:50.870263] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:14:13.195 [2024-07-24 09:39:50.870389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85847 ] 00:14:13.531 [2024-07-24 09:39:51.038850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.531 [2024-07-24 09:39:51.081460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.098 09:39:51 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:14.098 09:39:51 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:14:14.098 09:39:51 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:14:14.098 09:39:51 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:14:14.098 09:39:51 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:14.098 09:39:51 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:14.099 09:39:51 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:14.357 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:14.615 Waiting for block devices as requested 00:14:14.874 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:14.874 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:15.132 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:15.132 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:20.405 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:20.405 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:20.405 09:39:57 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:20.405 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:14:20.406 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:14:20.406 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:20.406 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:20.406 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:20.406 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:14:20.406 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:14:20.406 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:20.406 09:39:57 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:20.406 09:39:57 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:20.406 09:39:57 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:20.406 09:39:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.406 nvme0n1 00:14:20.406 nvme1n1 00:14:20.406 nvme2n1 00:14:20.406 nvme2n2 00:14:20.406 nvme2n3 00:14:20.406 nvme3n1 00:14:20.406 09:39:57 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:14:20.406 09:39:57 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.406 09:39:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.406 09:39:57 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:14:20.406 09:39:57 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.406 09:39:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.406 09:39:57 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.406 09:39:57 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:14:20.406 09:39:57 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.406 09:39:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.406 09:39:58 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.406 09:39:58 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:20.406 09:39:58 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.406 09:39:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.406 09:39:58 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.406 09:39:58 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:14:20.406 09:39:58 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:14:20.406 09:39:58 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.406 
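By this point blockdev.sh has rebound the controllers to the kernel nvme driver, checked each namespace for zoned mode (none here), and collected one bdev_xnvme_create line per /dev/nvme*n* namespace, which it now replays through rpc_cmd against the spdk_tgt started earlier; that is why nvme0n1 through nvme3n1 show up as xNVMe bdevs in the bdev_get_bdevs dump below. The same setup can be done by hand with scripts/rpc.py against the target's default socket (/var/tmp/spdk.sock); the sketch shows only two of the six namespaces from this run and uses a plain sleep where the test uses waitforlisten:

./build/bin/spdk_tgt &                                                # RPC server comes up on /var/tmp/spdk.sock
sleep 2                                                               # crude stand-in for the test's waitforlisten
./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring      # filename, bdev name, io_mechanism
./scripts/rpc.py bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring
./scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'                    # expect the xNVMe bdevs just created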
09:39:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.406 09:39:58 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:14:20.406 09:39:58 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.406 09:39:58 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:14:20.406 09:39:58 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:14:20.406 09:39:58 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ea1f7cbb-4896-4f7a-be7a-20abf97e8015"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ea1f7cbb-4896-4f7a-be7a-20abf97e8015",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "927a37df-3030-40b4-9c3e-5ede487a2632"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "927a37df-3030-40b4-9c3e-5ede487a2632",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "9353e957-b1b3-445a-a654-eac16b49a65e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9353e957-b1b3-445a-a654-eac16b49a65e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "ae26fc8b-6410-4cb1-bb60-333bbf005095"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ae26fc8b-6410-4cb1-bb60-333bbf005095",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 
0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "63fa7be0-22eb-4585-bd37-4989e6086578"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "63fa7be0-22eb-4585-bd37-4989e6086578",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "1070a55d-8ba5-4ebd-95ab-2a66e55428d6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1070a55d-8ba5-4ebd-95ab-2a66e55428d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:20.406 09:39:58 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:14:20.406 09:39:58 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:14:20.406 09:39:58 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:14:20.406 09:39:58 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 85847 00:14:20.406 09:39:58 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 85847 ']' 00:14:20.406 09:39:58 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 85847 00:14:20.406 09:39:58 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:14:20.406 09:39:58 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:20.406 09:39:58 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85847 00:14:20.406 09:39:58 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:20.406 09:39:58 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:20.406 09:39:58 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85847' 00:14:20.407 
killing process with pid 85847 00:14:20.407 09:39:58 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 85847 00:14:20.407 09:39:58 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 85847 00:14:20.974 09:39:58 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:20.974 09:39:58 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:20.974 09:39:58 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:20.974 09:39:58 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:20.974 09:39:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.974 ************************************ 00:14:20.974 START TEST bdev_hello_world 00:14:20.974 ************************************ 00:14:20.974 09:39:58 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:20.975 [2024-07-24 09:39:58.632811] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:14:20.975 [2024-07-24 09:39:58.632948] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86195 ] 00:14:21.233 [2024-07-24 09:39:58.800530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.233 [2024-07-24 09:39:58.842632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.233 [2024-07-24 09:39:59.021636] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:21.233 [2024-07-24 09:39:59.021694] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:21.233 [2024-07-24 09:39:59.021719] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:21.233 [2024-07-24 09:39:59.023993] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:21.233 [2024-07-24 09:39:59.024537] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:21.233 [2024-07-24 09:39:59.024578] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:21.233 [2024-07-24 09:39:59.024862] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
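The bdev selection traced above (blockdev.sh@747-751) boils down to: dump the bdevs over RPC, keep only the unclaimed ones, and use the first name as the hello_world target. A minimal sketch of that pattern, assuming a running SPDK target reachable through the stock rpc.py on its default socket; the exact script differs, this is only illustrative:

    # Hedged sketch of the unclaimed-bdev selection seen in the trace above.
    # Assumes ./scripts/rpc.py can reach the running app on its default socket.
    mapfile -t bdevs_name < <(./scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.claimed == false) | .name')
    bdev_list=("${bdevs_name[@]}")
    hello_world_bdev=${bdev_list[0]}          # e.g. nvme0n1 in this run
    echo "hello_bdev target: $hello_world_bdev"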
00:14:21.233 00:14:21.233 [2024-07-24 09:39:59.024893] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:21.491 00:14:21.491 real 0m0.701s 00:14:21.491 user 0m0.380s 00:14:21.491 sys 0m0.210s 00:14:21.491 09:39:59 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:21.491 09:39:59 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:21.491 ************************************ 00:14:21.491 END TEST bdev_hello_world 00:14:21.491 ************************************ 00:14:21.749 09:39:59 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:14:21.749 09:39:59 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:21.749 09:39:59 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:21.749 09:39:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:21.749 ************************************ 00:14:21.749 START TEST bdev_bounds 00:14:21.749 ************************************ 00:14:21.749 09:39:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:14:21.749 09:39:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=86226 00:14:21.749 09:39:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:21.749 09:39:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:21.749 Process bdevio pid: 86226 00:14:21.749 09:39:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 86226' 00:14:21.749 09:39:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 86226 00:14:21.749 09:39:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 86226 ']' 00:14:21.749 09:39:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.749 09:39:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:21.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.749 09:39:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.749 09:39:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:21.749 09:39:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:21.749 [2024-07-24 09:39:59.404779] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:14:21.749 [2024-07-24 09:39:59.404925] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86226 ] 00:14:22.014 [2024-07-24 09:39:59.572286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:22.014 [2024-07-24 09:39:59.617846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.014 [2024-07-24 09:39:59.617873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.014 [2024-07-24 09:39:59.617986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.595 09:40:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:22.595 09:40:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:14:22.595 09:40:00 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:22.595 I/O targets: 00:14:22.595 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:22.595 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:22.595 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:22.595 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:22.595 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:22.595 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:22.595 00:14:22.595 00:14:22.595 CUnit - A unit testing framework for C - Version 2.1-3 00:14:22.595 http://cunit.sourceforge.net/ 00:14:22.595 00:14:22.595 00:14:22.595 Suite: bdevio tests on: nvme3n1 00:14:22.595 Test: blockdev write read block ...passed 00:14:22.595 Test: blockdev write zeroes read block ...passed 00:14:22.595 Test: blockdev write zeroes read no split ...passed 00:14:22.595 Test: blockdev write zeroes read split ...passed 00:14:22.595 Test: blockdev write zeroes read split partial ...passed 00:14:22.595 Test: blockdev reset ...passed 00:14:22.595 Test: blockdev write read 8 blocks ...passed 00:14:22.595 Test: blockdev write read size > 128k ...passed 00:14:22.595 Test: blockdev write read invalid size ...passed 00:14:22.595 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:22.595 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:22.595 Test: blockdev write read max offset ...passed 00:14:22.595 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:22.595 Test: blockdev writev readv 8 blocks ...passed 00:14:22.595 Test: blockdev writev readv 30 x 1block ...passed 00:14:22.595 Test: blockdev writev readv block ...passed 00:14:22.595 Test: blockdev writev readv size > 128k ...passed 00:14:22.595 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:22.595 Test: blockdev comparev and writev ...passed 00:14:22.595 Test: blockdev nvme passthru rw ...passed 00:14:22.595 Test: blockdev nvme passthru vendor specific ...passed 00:14:22.595 Test: blockdev nvme admin passthru ...passed 00:14:22.595 Test: blockdev copy ...passed 00:14:22.595 Suite: bdevio tests on: nvme2n3 00:14:22.595 Test: blockdev write read block ...passed 00:14:22.595 Test: blockdev write zeroes read block ...passed 00:14:22.595 Test: blockdev write zeroes read no split ...passed 00:14:22.595 Test: blockdev write zeroes read split ...passed 00:14:22.595 Test: blockdev write zeroes read split partial ...passed 00:14:22.595 Test: blockdev reset ...passed 
00:14:22.595 Test: blockdev write read 8 blocks ...passed 00:14:22.595 Test: blockdev write read size > 128k ...passed 00:14:22.595 Test: blockdev write read invalid size ...passed 00:14:22.595 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:22.595 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:22.595 Test: blockdev write read max offset ...passed 00:14:22.595 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:22.595 Test: blockdev writev readv 8 blocks ...passed 00:14:22.595 Test: blockdev writev readv 30 x 1block ...passed 00:14:22.595 Test: blockdev writev readv block ...passed 00:14:22.595 Test: blockdev writev readv size > 128k ...passed 00:14:22.595 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:22.595 Test: blockdev comparev and writev ...passed 00:14:22.595 Test: blockdev nvme passthru rw ...passed 00:14:22.595 Test: blockdev nvme passthru vendor specific ...passed 00:14:22.595 Test: blockdev nvme admin passthru ...passed 00:14:22.595 Test: blockdev copy ...passed 00:14:22.595 Suite: bdevio tests on: nvme2n2 00:14:22.595 Test: blockdev write read block ...passed 00:14:22.595 Test: blockdev write zeroes read block ...passed 00:14:22.595 Test: blockdev write zeroes read no split ...passed 00:14:22.595 Test: blockdev write zeroes read split ...passed 00:14:22.595 Test: blockdev write zeroes read split partial ...passed 00:14:22.595 Test: blockdev reset ...passed 00:14:22.595 Test: blockdev write read 8 blocks ...passed 00:14:22.595 Test: blockdev write read size > 128k ...passed 00:14:22.595 Test: blockdev write read invalid size ...passed 00:14:22.595 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:22.595 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:22.595 Test: blockdev write read max offset ...passed 00:14:22.595 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:22.595 Test: blockdev writev readv 8 blocks ...passed 00:14:22.595 Test: blockdev writev readv 30 x 1block ...passed 00:14:22.595 Test: blockdev writev readv block ...passed 00:14:22.595 Test: blockdev writev readv size > 128k ...passed 00:14:22.595 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:22.595 Test: blockdev comparev and writev ...passed 00:14:22.595 Test: blockdev nvme passthru rw ...passed 00:14:22.595 Test: blockdev nvme passthru vendor specific ...passed 00:14:22.595 Test: blockdev nvme admin passthru ...passed 00:14:22.595 Test: blockdev copy ...passed 00:14:22.595 Suite: bdevio tests on: nvme2n1 00:14:22.595 Test: blockdev write read block ...passed 00:14:22.595 Test: blockdev write zeroes read block ...passed 00:14:22.861 Test: blockdev write zeroes read no split ...passed 00:14:22.861 Test: blockdev write zeroes read split ...passed 00:14:22.861 Test: blockdev write zeroes read split partial ...passed 00:14:22.861 Test: blockdev reset ...passed 00:14:22.861 Test: blockdev write read 8 blocks ...passed 00:14:22.861 Test: blockdev write read size > 128k ...passed 00:14:22.861 Test: blockdev write read invalid size ...passed 00:14:22.861 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:22.861 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:22.862 Test: blockdev write read max offset ...passed 00:14:22.862 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:22.862 Test: blockdev writev readv 8 blocks 
...passed 00:14:22.862 Test: blockdev writev readv 30 x 1block ...passed 00:14:22.862 Test: blockdev writev readv block ...passed 00:14:22.862 Test: blockdev writev readv size > 128k ...passed 00:14:22.862 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:22.862 Test: blockdev comparev and writev ...passed 00:14:22.862 Test: blockdev nvme passthru rw ...passed 00:14:22.862 Test: blockdev nvme passthru vendor specific ...passed 00:14:22.862 Test: blockdev nvme admin passthru ...passed 00:14:22.862 Test: blockdev copy ...passed 00:14:22.862 Suite: bdevio tests on: nvme1n1 00:14:22.862 Test: blockdev write read block ...passed 00:14:22.862 Test: blockdev write zeroes read block ...passed 00:14:22.862 Test: blockdev write zeroes read no split ...passed 00:14:22.862 Test: blockdev write zeroes read split ...passed 00:14:22.862 Test: blockdev write zeroes read split partial ...passed 00:14:22.862 Test: blockdev reset ...passed 00:14:22.862 Test: blockdev write read 8 blocks ...passed 00:14:22.862 Test: blockdev write read size > 128k ...passed 00:14:22.862 Test: blockdev write read invalid size ...passed 00:14:22.862 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:22.862 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:22.862 Test: blockdev write read max offset ...passed 00:14:22.862 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:22.862 Test: blockdev writev readv 8 blocks ...passed 00:14:22.862 Test: blockdev writev readv 30 x 1block ...passed 00:14:22.862 Test: blockdev writev readv block ...passed 00:14:22.862 Test: blockdev writev readv size > 128k ...passed 00:14:22.862 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:22.862 Test: blockdev comparev and writev ...passed 00:14:22.862 Test: blockdev nvme passthru rw ...passed 00:14:22.862 Test: blockdev nvme passthru vendor specific ...passed 00:14:22.862 Test: blockdev nvme admin passthru ...passed 00:14:22.862 Test: blockdev copy ...passed 00:14:22.862 Suite: bdevio tests on: nvme0n1 00:14:22.862 Test: blockdev write read block ...passed 00:14:22.862 Test: blockdev write zeroes read block ...passed 00:14:22.862 Test: blockdev write zeroes read no split ...passed 00:14:22.862 Test: blockdev write zeroes read split ...passed 00:14:22.862 Test: blockdev write zeroes read split partial ...passed 00:14:22.862 Test: blockdev reset ...passed 00:14:22.862 Test: blockdev write read 8 blocks ...passed 00:14:22.862 Test: blockdev write read size > 128k ...passed 00:14:22.862 Test: blockdev write read invalid size ...passed 00:14:22.862 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:22.862 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:22.862 Test: blockdev write read max offset ...passed 00:14:22.862 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:22.862 Test: blockdev writev readv 8 blocks ...passed 00:14:22.862 Test: blockdev writev readv 30 x 1block ...passed 00:14:22.862 Test: blockdev writev readv block ...passed 00:14:22.862 Test: blockdev writev readv size > 128k ...passed 00:14:22.862 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:22.862 Test: blockdev comparev and writev ...passed 00:14:22.862 Test: blockdev nvme passthru rw ...passed 00:14:22.862 Test: blockdev nvme passthru vendor specific ...passed 00:14:22.862 Test: blockdev nvme admin passthru ...passed 00:14:22.863 Test: blockdev copy ...passed 
00:14:22.863 00:14:22.863 Run Summary: Type Total Ran Passed Failed Inactive 00:14:22.863 suites 6 6 n/a 0 0 00:14:22.863 tests 138 138 138 0 0 00:14:22.863 asserts 780 780 780 0 n/a 00:14:22.863 00:14:22.863 Elapsed time = 0.407 seconds 00:14:22.863 0 00:14:22.863 09:40:00 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 86226 00:14:22.863 09:40:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 86226 ']' 00:14:22.863 09:40:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 86226 00:14:22.863 09:40:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:14:22.863 09:40:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:22.863 09:40:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86226 00:14:22.863 09:40:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:22.863 09:40:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:22.863 09:40:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86226' 00:14:22.863 killing process with pid 86226 00:14:22.863 09:40:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 86226 00:14:22.863 09:40:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 86226 00:14:23.127 09:40:00 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:14:23.127 00:14:23.127 real 0m1.470s 00:14:23.128 user 0m3.490s 00:14:23.128 sys 0m0.383s 00:14:23.128 09:40:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:23.128 09:40:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:23.128 ************************************ 00:14:23.128 END TEST bdev_bounds 00:14:23.128 ************************************ 00:14:23.128 09:40:00 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:23.128 09:40:00 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:23.128 09:40:00 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:23.128 09:40:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:23.128 ************************************ 00:14:23.128 START TEST bdev_nbd 00:14:23.128 ************************************ 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
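The nbd test that starts here maps each xNVMe bdev to a kernel /dev/nbdX node over the /var/tmp/spdk-nbd.sock RPC socket, waits for the device to appear in /proc/partitions, and reads one 4096-byte block with dd as a sanity check (see the per-device traces below). A rough, consolidated sketch of that per-device cycle; the retry timing and temp file path here are chosen for illustration, not taken from the script:

    # Hedged sketch of the attach/verify/detach cycle driven by nbd_common.sh below.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc nbd_start_disk nvme0n1 /dev/nbd0                          # expose the bdev as /dev/nbd0
    for ((i = 1; i <= 20; i++)); do                                # wait for the kernel node
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1
    done
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # read one block
    [[ "$(stat -c %s /tmp/nbdtest)" -ne 0 ]]                       # the copy must not be empty
    $rpc nbd_stop_disk /dev/nbd0                                   # detach again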
00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=86271 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 86271 /var/tmp/spdk-nbd.sock 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 86271 ']' 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:23.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:23.128 09:40:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:23.387 [2024-07-24 09:40:00.958639] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:14:23.387 [2024-07-24 09:40:00.958763] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.387 [2024-07-24 09:40:01.126979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.387 [2024-07-24 09:40:01.172387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.953 09:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:23.953 09:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:14:23.953 09:40:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:23.953 09:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:23.953 09:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:23.953 09:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:23.953 09:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:23.953 09:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:23.953 09:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:23.953 09:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:23.953 09:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:23.953 09:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:23.953 09:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:23.953 09:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:23.953 09:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:24.211 09:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:24.211 09:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:24.211 09:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:24.212 09:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:24.212 09:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:24.212 09:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:24.212 09:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:24.212 09:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:24.212 09:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:24.212 09:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:24.212 09:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:24.212 09:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:24.212 
1+0 records in 00:14:24.212 1+0 records out 00:14:24.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000760339 s, 5.4 MB/s 00:14:24.212 09:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.212 09:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:24.212 09:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.212 09:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:24.212 09:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:24.212 09:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:24.212 09:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:24.212 09:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:24.470 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:24.470 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:24.470 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:24.470 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:24.470 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:24.470 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:24.470 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:24.470 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:24.470 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:24.470 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:24.470 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:24.470 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:24.470 1+0 records in 00:14:24.470 1+0 records out 00:14:24.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000656904 s, 6.2 MB/s 00:14:24.470 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.470 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:24.470 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.470 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:24.470 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:24.470 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:24.470 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:24.470 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:24.728 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:24.728 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:24.728 09:40:02 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:24.728 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:14:24.728 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:24.728 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:24.728 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:24.728 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:14:24.728 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:24.728 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:24.728 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:24.728 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:24.728 1+0 records in 00:14:24.728 1+0 records out 00:14:24.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566775 s, 7.2 MB/s 00:14:24.728 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.728 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:24.728 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.728 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:24.728 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:24.728 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:24.728 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:24.728 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:14:24.987 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:24.987 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:24.987 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:24.987 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:14:24.987 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:24.987 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:24.987 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:24.987 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:14:24.987 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:24.987 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:24.987 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:24.987 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:24.987 1+0 records in 00:14:24.987 1+0 records out 00:14:24.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000784355 s, 5.2 MB/s 00:14:24.987 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.987 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:24.987 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.987 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:24.987 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:24.987 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:24.987 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:24.987 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:14:25.245 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:25.245 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:25.245 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:25.245 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:14:25.245 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:25.245 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:25.245 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:25.245 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:14:25.245 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:25.245 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:25.245 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:25.245 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.245 1+0 records in 00:14:25.245 1+0 records out 00:14:25.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000732374 s, 5.6 MB/s 00:14:25.245 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.245 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:25.245 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.245 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:25.245 09:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:25.245 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:25.245 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:25.245 09:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:25.504 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:25.504 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:25.504 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:25.504 09:40:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:14:25.504 09:40:03 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:25.504 09:40:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:25.504 09:40:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:25.504 09:40:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:14:25.504 09:40:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:25.504 09:40:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:25.504 09:40:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:25.504 09:40:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.504 1+0 records in 00:14:25.504 1+0 records out 00:14:25.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688311 s, 6.0 MB/s 00:14:25.504 09:40:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.504 09:40:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:25.504 09:40:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.504 09:40:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:25.504 09:40:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:25.504 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:25.504 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:25.504 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:25.768 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:25.768 { 00:14:25.768 "nbd_device": "/dev/nbd0", 00:14:25.768 "bdev_name": "nvme0n1" 00:14:25.768 }, 00:14:25.768 { 00:14:25.768 "nbd_device": "/dev/nbd1", 00:14:25.768 "bdev_name": "nvme1n1" 00:14:25.768 }, 00:14:25.768 { 00:14:25.768 "nbd_device": "/dev/nbd2", 00:14:25.768 "bdev_name": "nvme2n1" 00:14:25.768 }, 00:14:25.768 { 00:14:25.768 "nbd_device": "/dev/nbd3", 00:14:25.768 "bdev_name": "nvme2n2" 00:14:25.768 }, 00:14:25.768 { 00:14:25.768 "nbd_device": "/dev/nbd4", 00:14:25.768 "bdev_name": "nvme2n3" 00:14:25.768 }, 00:14:25.768 { 00:14:25.768 "nbd_device": "/dev/nbd5", 00:14:25.768 "bdev_name": "nvme3n1" 00:14:25.768 } 00:14:25.768 ]' 00:14:25.768 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:25.768 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:25.768 { 00:14:25.768 "nbd_device": "/dev/nbd0", 00:14:25.768 "bdev_name": "nvme0n1" 00:14:25.768 }, 00:14:25.768 { 00:14:25.768 "nbd_device": "/dev/nbd1", 00:14:25.768 "bdev_name": "nvme1n1" 00:14:25.768 }, 00:14:25.768 { 00:14:25.768 "nbd_device": "/dev/nbd2", 00:14:25.768 "bdev_name": "nvme2n1" 00:14:25.768 }, 00:14:25.768 { 00:14:25.768 "nbd_device": "/dev/nbd3", 00:14:25.768 "bdev_name": "nvme2n2" 00:14:25.768 }, 00:14:25.768 { 00:14:25.768 "nbd_device": "/dev/nbd4", 00:14:25.768 "bdev_name": "nvme2n3" 00:14:25.768 }, 00:14:25.768 { 00:14:25.768 "nbd_device": "/dev/nbd5", 00:14:25.768 "bdev_name": "nvme3n1" 00:14:25.768 } 00:14:25.768 ]' 00:14:25.768 09:40:03 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:25.768 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:25.768 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:25.768 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:25.768 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:25.768 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:25.768 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.768 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:26.026 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:26.026 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:26.026 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:26.026 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.026 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.026 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:26.026 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:26.026 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.026 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.026 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:26.026 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:26.285 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:26.285 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:26.285 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.285 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.285 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:26.285 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:26.285 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.285 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.285 09:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:26.285 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:26.285 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:26.285 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:26.285 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.285 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.285 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:14:26.285 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:26.285 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.286 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.286 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:26.544 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:26.544 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:26.544 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:26.544 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.544 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.544 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:26.544 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:26.544 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.544 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.544 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:26.803 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:26.803 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:26.804 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:26.804 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.804 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.804 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:26.804 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:26.804 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.804 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.804 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:27.062 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:27.062 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:27.062 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:27.062 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:27.062 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:27.062 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:27.062 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:27.062 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:27.062 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:27.062 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:27.063 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:27.063 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:27.063 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:27.063 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:27.063 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:27.063 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:27.063 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:27.322 09:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:27.322 /dev/nbd0 00:14:27.322 09:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:27.322 09:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:27.322 09:40:05 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:27.322 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:27.322 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:27.322 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:27.323 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:27.323 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:27.323 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:27.323 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:27.323 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:27.323 1+0 records in 00:14:27.323 1+0 records out 00:14:27.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653861 s, 6.3 MB/s 00:14:27.323 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.323 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:27.323 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.323 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:27.323 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:27.323 09:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:27.323 09:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:27.323 09:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:14:27.581 /dev/nbd1 00:14:27.581 09:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:27.581 09:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:27.581 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:27.581 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:27.581 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:27.581 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:27.581 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:27.581 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:27.581 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:27.581 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:27.581 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:27.581 1+0 records in 00:14:27.581 1+0 records out 00:14:27.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000637039 s, 6.4 MB/s 00:14:27.581 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.581 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:27.581 09:40:05 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.581 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:27.581 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:27.581 09:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:27.581 09:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:27.581 09:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:14:27.840 /dev/nbd10 00:14:27.840 09:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:27.840 09:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:27.840 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:14:27.840 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:27.840 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:27.840 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:27.840 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:14:27.840 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:27.840 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:27.840 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:27.840 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:27.840 1+0 records in 00:14:27.840 1+0 records out 00:14:27.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580386 s, 7.1 MB/s 00:14:27.840 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.840 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:27.840 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.840 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:27.840 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:27.840 09:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:27.840 09:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:27.840 09:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:14:28.099 /dev/nbd11 00:14:28.099 09:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:28.099 09:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:28.099 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:14:28.099 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:28.099 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:28.099 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:28.100 09:40:05 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:14:28.100 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:28.100 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:28.100 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:28.100 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:28.100 1+0 records in 00:14:28.100 1+0 records out 00:14:28.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674657 s, 6.1 MB/s 00:14:28.100 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.100 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:28.100 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.100 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:28.100 09:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:28.100 09:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.100 09:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:28.100 09:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:14:28.359 /dev/nbd12 00:14:28.359 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:28.359 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:28.359 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:14:28.359 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:28.359 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:28.359 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:28.359 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:14:28.359 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:28.359 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:28.359 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:28.359 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:28.359 1+0 records in 00:14:28.359 1+0 records out 00:14:28.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462355 s, 8.9 MB/s 00:14:28.359 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.359 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:28.359 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.359 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:28.359 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:28.359 09:40:06 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.359 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:28.359 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:14:28.618 /dev/nbd13 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:28.618 1+0 records in 00:14:28.618 1+0 records out 00:14:28.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000958152 s, 4.3 MB/s 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:28.618 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:28.878 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:28.878 { 00:14:28.878 "nbd_device": "/dev/nbd0", 00:14:28.878 "bdev_name": "nvme0n1" 00:14:28.878 }, 00:14:28.878 { 00:14:28.878 "nbd_device": "/dev/nbd1", 00:14:28.878 "bdev_name": "nvme1n1" 00:14:28.878 }, 00:14:28.878 { 00:14:28.878 "nbd_device": "/dev/nbd10", 00:14:28.878 "bdev_name": "nvme2n1" 00:14:28.878 }, 00:14:28.878 { 00:14:28.878 "nbd_device": "/dev/nbd11", 00:14:28.878 "bdev_name": "nvme2n2" 00:14:28.878 }, 00:14:28.878 { 00:14:28.878 "nbd_device": "/dev/nbd12", 00:14:28.878 "bdev_name": "nvme2n3" 00:14:28.878 }, 00:14:28.878 { 00:14:28.878 "nbd_device": "/dev/nbd13", 00:14:28.878 "bdev_name": "nvme3n1" 00:14:28.878 } 00:14:28.878 ]' 00:14:28.878 09:40:06 
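The nbd_start_disks / waitfornbd sequence traced above boils down to three steps per device: export the bdev on a /dev/nbdX node over the RPC socket, wait for that node to appear in /proc/partitions, then prove it is usable with a single 4 KiB direct read. A minimal standalone sketch of that loop, assuming an SPDK app is already serving RPCs on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded (the retry sleep interval is an assumption; the trace elides it):

#!/usr/bin/env bash
set -e
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
bdevs=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

for i in "${!bdevs[@]}"; do
    # Export the bdev as a kernel NBD block device.
    "$rpc" -s "$sock" nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"

    # Wait for the device node to show up in /proc/partitions (up to 20 tries).
    name=$(basename "${nbds[$i]}")
    for _ in $(seq 1 20); do
        grep -q -w "$name" /proc/partitions && break
        sleep 0.1   # assumption: the traced helper's wait interval is not shown
    done

    # One direct 4 KiB read confirms the device is readable end to end.
    dd if="${nbds[$i]}" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    rm -f /tmp/nbdtest
done

# List what is currently attached; the test expects to see all six devices.
"$rpc" -s "$sock" nbd_get_disks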
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:28.878 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:28.878 { 00:14:28.878 "nbd_device": "/dev/nbd0", 00:14:28.878 "bdev_name": "nvme0n1" 00:14:28.878 }, 00:14:28.878 { 00:14:28.878 "nbd_device": "/dev/nbd1", 00:14:28.878 "bdev_name": "nvme1n1" 00:14:28.878 }, 00:14:28.878 { 00:14:28.878 "nbd_device": "/dev/nbd10", 00:14:28.878 "bdev_name": "nvme2n1" 00:14:28.878 }, 00:14:28.878 { 00:14:28.878 "nbd_device": "/dev/nbd11", 00:14:28.878 "bdev_name": "nvme2n2" 00:14:28.878 }, 00:14:28.878 { 00:14:28.878 "nbd_device": "/dev/nbd12", 00:14:28.878 "bdev_name": "nvme2n3" 00:14:28.878 }, 00:14:28.878 { 00:14:28.878 "nbd_device": "/dev/nbd13", 00:14:28.878 "bdev_name": "nvme3n1" 00:14:28.878 } 00:14:28.878 ]' 00:14:28.878 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:28.878 /dev/nbd1 00:14:28.878 /dev/nbd10 00:14:28.878 /dev/nbd11 00:14:28.878 /dev/nbd12 00:14:28.878 /dev/nbd13' 00:14:28.878 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:28.878 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:28.878 /dev/nbd1 00:14:28.878 /dev/nbd10 00:14:28.878 /dev/nbd11 00:14:28.878 /dev/nbd12 00:14:28.878 /dev/nbd13' 00:14:28.878 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:14:28.878 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:14:28.878 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:14:28.878 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:28.878 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:28.878 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:28.878 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:28.878 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:28.878 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:28.878 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:28.878 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:28.878 256+0 records in 00:14:28.878 256+0 records out 00:14:28.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121836 s, 86.1 MB/s 00:14:28.878 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:28.878 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:29.137 256+0 records in 00:14:29.137 256+0 records out 00:14:29.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.111637 s, 9.4 MB/s 00:14:29.137 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:29.137 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:29.138 256+0 records in 00:14:29.138 256+0 records out 00:14:29.138 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.144316 s, 7.3 MB/s 00:14:29.138 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:29.138 09:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:29.396 256+0 records in 00:14:29.396 256+0 records out 00:14:29.396 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.11768 s, 8.9 MB/s 00:14:29.396 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:29.396 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:29.396 256+0 records in 00:14:29.396 256+0 records out 00:14:29.396 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120475 s, 8.7 MB/s 00:14:29.397 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:29.397 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:29.655 256+0 records in 00:14:29.655 256+0 records out 00:14:29.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116783 s, 9.0 MB/s 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:29.656 256+0 records in 00:14:29.656 256+0 records out 00:14:29.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119461 s, 8.8 MB/s 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.656 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:29.915 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:29.915 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:29.915 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:29.915 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.915 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.915 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:29.915 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:29.915 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.915 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.915 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:30.174 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:30.174 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:30.174 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:30.174 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.174 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.174 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:30.174 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:30.174 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.174 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:30.174 09:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:30.433 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:30.433 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:30.433 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:30.433 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.434 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.434 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:30.434 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:30.434 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.434 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:30.434 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:30.692 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:30.692 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:30.692 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:30.692 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.692 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.692 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:30.692 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:30.692 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.692 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:30.692 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:30.692 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:30.692 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:30.692 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:30.692 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.692 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.692 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:30.692 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:30.692 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.692 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:30.692 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:30.951 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:30.951 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:30.951 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:30.951 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.951 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:14:30.951 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:30.951 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:30.951 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.951 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:30.951 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:30.951 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:31.210 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:31.210 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:31.210 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:31.210 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:31.210 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:31.210 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:31.210 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:31.210 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:31.210 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:31.210 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:31.210 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:31.210 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:31.210 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:31.210 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:31.210 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:31.210 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:14:31.210 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:14:31.210 09:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:31.468 malloc_lvol_verify 00:14:31.468 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:31.739 283e2381-ebaf-4161-9e1d-a23127e0ec77 00:14:31.739 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:32.019 fd785766-6838-47fd-9511-ea3f69b02caf 00:14:32.019 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:32.019 /dev/nbd0 00:14:32.019 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:14:32.019 mke2fs 1.46.5 (30-Dec-2021) 00:14:32.019 Discarding device blocks: 0/4096 done 00:14:32.019 Creating filesystem with 4096 1k blocks and 
1024 inodes 00:14:32.019 00:14:32.019 Allocating group tables: 0/1 done 00:14:32.019 Writing inode tables: 0/1 done 00:14:32.019 Creating journal (1024 blocks): done 00:14:32.019 Writing superblocks and filesystem accounting information: 0/1 done 00:14:32.019 00:14:32.019 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:14:32.019 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:32.019 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:32.019 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:32.019 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:32.019 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:32.019 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.019 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:32.278 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:32.278 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:32.278 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:32.278 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.278 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.278 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:32.278 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:32.278 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.278 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:14:32.278 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:14:32.278 09:40:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 86271 00:14:32.278 09:40:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 86271 ']' 00:14:32.278 09:40:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 86271 00:14:32.278 09:40:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:14:32.278 09:40:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:32.278 09:40:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86271 00:14:32.278 09:40:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:32.278 09:40:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:32.278 09:40:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86271' 00:14:32.278 killing process with pid 86271 00:14:32.278 09:40:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 86271 00:14:32.278 09:40:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 86271 00:14:32.536 ************************************ 00:14:32.536 END TEST bdev_nbd 00:14:32.536 ************************************ 00:14:32.536 09:40:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:32.536 00:14:32.536 real 0m9.409s 00:14:32.536 user 0m12.362s 00:14:32.536 sys 
0m4.293s 00:14:32.536 09:40:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:32.536 09:40:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:32.536 09:40:10 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:14:32.536 09:40:10 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:14:32.536 09:40:10 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:14:32.536 09:40:10 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:14:32.536 09:40:10 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:32.536 09:40:10 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:32.536 09:40:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.536 ************************************ 00:14:32.536 START TEST bdev_fio 00:14:32.536 ************************************ 00:14:32.536 09:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:14:32.536 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:32.536 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:14:32.536 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:32.536 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # 
[[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:14:32.794 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:32.795 ************************************ 00:14:32.795 START TEST bdev_fio_rw_verify 00:14:32.795 ************************************ 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:32.795 09:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:33.053 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.053 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.053 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.053 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.053 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.053 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.053 fio-3.35 00:14:33.053 Starting 6 threads 00:14:45.362 00:14:45.362 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=86662: Wed Jul 24 09:40:21 2024 00:14:45.362 read: IOPS=32.4k, BW=126MiB/s (133MB/s)(1264MiB/10001msec) 00:14:45.362 slat (usec): 
min=2, max=1259, avg= 6.35, stdev= 4.64 00:14:45.362 clat (usec): min=87, max=9587, avg=598.03, stdev=203.63 00:14:45.362 lat (usec): min=93, max=9593, avg=604.39, stdev=204.35 00:14:45.362 clat percentiles (usec): 00:14:45.362 | 50.000th=[ 619], 99.000th=[ 1106], 99.900th=[ 1860], 99.990th=[ 3851], 00:14:45.362 | 99.999th=[ 9634] 00:14:45.362 write: IOPS=32.7k, BW=128MiB/s (134MB/s)(1278MiB/10001msec); 0 zone resets 00:14:45.362 slat (usec): min=7, max=3037, avg=20.42, stdev=25.28 00:14:45.362 clat (usec): min=78, max=5653, avg=665.80, stdev=210.00 00:14:45.362 lat (usec): min=93, max=5733, avg=686.22, stdev=212.64 00:14:45.362 clat percentiles (usec): 00:14:45.362 | 50.000th=[ 668], 99.000th=[ 1336], 99.900th=[ 2008], 99.990th=[ 2966], 00:14:45.362 | 99.999th=[ 5538] 00:14:45.362 bw ( KiB/s): min=102288, max=149441, per=100.00%, avg=130956.47, stdev=2402.63, samples=114 00:14:45.362 iops : min=25572, max=37360, avg=32739.00, stdev=600.65, samples=114 00:14:45.362 lat (usec) : 100=0.01%, 250=3.77%, 500=17.28%, 750=58.86%, 1000=16.88% 00:14:45.362 lat (msec) : 2=3.12%, 4=0.08%, 10=0.01% 00:14:45.362 cpu : usr=60.91%, sys=27.56%, ctx=7915, majf=0, minf=27062 00:14:45.362 IO depths : 1=12.2%, 2=24.7%, 4=50.3%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:45.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.362 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.362 issued rwts: total=323584,327129,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:45.362 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:45.362 00:14:45.362 Run status group 0 (all jobs): 00:14:45.362 READ: bw=126MiB/s (133MB/s), 126MiB/s-126MiB/s (133MB/s-133MB/s), io=1264MiB (1325MB), run=10001-10001msec 00:14:45.362 WRITE: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=1278MiB (1340MB), run=10001-10001msec 00:14:45.362 ----------------------------------------------------- 00:14:45.362 Suppressions used: 00:14:45.362 count bytes template 00:14:45.362 6 48 /usr/src/fio/parse.c 00:14:45.362 3306 317376 /usr/src/fio/iolog.c 00:14:45.362 1 8 libtcmalloc_minimal.so 00:14:45.362 1 904 libcrypto.so 00:14:45.362 ----------------------------------------------------- 00:14:45.362 00:14:45.362 00:14:45.362 real 0m11.196s 00:14:45.362 user 0m37.269s 00:14:45.362 sys 0m16.906s 00:14:45.362 ************************************ 00:14:45.362 END TEST bdev_fio_rw_verify 00:14:45.362 ************************************ 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:14:45.362 
09:40:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ea1f7cbb-4896-4f7a-be7a-20abf97e8015"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ea1f7cbb-4896-4f7a-be7a-20abf97e8015",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "927a37df-3030-40b4-9c3e-5ede487a2632"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "927a37df-3030-40b4-9c3e-5ede487a2632",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "9353e957-b1b3-445a-a654-eac16b49a65e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9353e957-b1b3-445a-a654-eac16b49a65e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' 
"nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "ae26fc8b-6410-4cb1-bb60-333bbf005095"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ae26fc8b-6410-4cb1-bb60-333bbf005095",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "63fa7be0-22eb-4585-bd37-4989e6086578"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "63fa7be0-22eb-4585-bd37-4989e6086578",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "1070a55d-8ba5-4ebd-95ab-2a66e55428d6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1070a55d-8ba5-4ebd-95ab-2a66e55428d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:45.362 /home/vagrant/spdk_repo/spdk 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 
00:14:45.362 09:40:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:14:45.363 00:14:45.363 real 0m11.417s 00:14:45.363 user 0m37.377s 00:14:45.363 sys 0m17.022s 00:14:45.363 ************************************ 00:14:45.363 END TEST bdev_fio 00:14:45.363 ************************************ 00:14:45.363 09:40:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:45.363 09:40:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:45.363 09:40:21 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:45.363 09:40:21 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:45.363 09:40:21 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:14:45.363 09:40:21 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:45.363 09:40:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:45.363 ************************************ 00:14:45.363 START TEST bdev_verify 00:14:45.363 ************************************ 00:14:45.363 09:40:21 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:45.363 [2024-07-24 09:40:21.925151] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:14:45.363 [2024-07-24 09:40:21.925331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86831 ] 00:14:45.363 [2024-07-24 09:40:22.093054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:45.363 [2024-07-24 09:40:22.140602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.363 [2024-07-24 09:40:22.140659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.363 Running I/O for 5 seconds... 
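The verify pass whose results follow is produced by SPDK's bdevperf example application rather than fio: it runs a verify workload against every bdev described in bdev.json with queue depth 128, 4 KiB I/Os, a 5-second runtime, and a two-core reactor mask. The command line, as invoked by run_test above (the later bdev_verify_big_io pass reuses it with -o 65536):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3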
00:14:50.629 00:14:50.629 Latency(us) 00:14:50.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.629 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:50.629 Verification LBA range: start 0x0 length 0xa0000 00:14:50.629 nvme0n1 : 5.04 1701.03 6.64 0.00 0.00 75133.16 20424.07 61903.88 00:14:50.629 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:50.629 Verification LBA range: start 0xa0000 length 0xa0000 00:14:50.629 nvme0n1 : 5.04 1879.21 7.34 0.00 0.00 67999.09 12738.72 62325.00 00:14:50.629 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:50.629 Verification LBA range: start 0x0 length 0xbd0bd 00:14:50.629 nvme1n1 : 5.06 2626.88 10.26 0.00 0.00 48452.36 5316.58 62746.11 00:14:50.629 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:50.629 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:14:50.629 nvme1n1 : 5.04 2957.65 11.55 0.00 0.00 43072.26 6316.72 56008.28 00:14:50.629 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:50.629 Verification LBA range: start 0x0 length 0x80000 00:14:50.629 nvme2n1 : 5.06 1721.32 6.72 0.00 0.00 73790.58 13159.84 73695.10 00:14:50.629 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:50.629 Verification LBA range: start 0x80000 length 0x80000 00:14:50.629 nvme2n1 : 5.06 1897.82 7.41 0.00 0.00 67012.90 12054.41 53902.70 00:14:50.629 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:50.629 Verification LBA range: start 0x0 length 0x80000 00:14:50.629 nvme2n2 : 5.06 1720.53 6.72 0.00 0.00 73678.63 11685.94 70747.30 00:14:50.629 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:50.629 Verification LBA range: start 0x80000 length 0x80000 00:14:50.629 nvme2n2 : 5.05 1876.70 7.33 0.00 0.00 67653.58 14844.30 58534.97 00:14:50.629 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:50.629 Verification LBA range: start 0x0 length 0x80000 00:14:50.629 nvme2n3 : 5.06 1719.78 6.72 0.00 0.00 73583.19 13054.56 66115.03 00:14:50.629 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:50.629 Verification LBA range: start 0x80000 length 0x80000 00:14:50.629 nvme2n3 : 5.06 1895.77 7.41 0.00 0.00 66868.11 2658.29 64430.57 00:14:50.629 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:50.629 Verification LBA range: start 0x0 length 0x20000 00:14:50.629 nvme3n1 : 5.07 1718.28 6.71 0.00 0.00 73569.39 3026.76 73695.10 00:14:50.629 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:50.629 Verification LBA range: start 0x20000 length 0x20000 00:14:50.629 nvme3n1 : 5.06 1896.61 7.41 0.00 0.00 66723.32 5421.85 64430.57 00:14:50.629 =================================================================================================================== 00:14:50.629 Total : 23611.59 92.23 0.00 0.00 64571.39 2658.29 73695.10 00:14:50.629 00:14:50.629 real 0m5.825s 00:14:50.629 user 0m8.442s 00:14:50.629 sys 0m2.140s 00:14:50.629 09:40:27 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:50.629 09:40:27 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:50.629 ************************************ 00:14:50.629 END TEST bdev_verify 00:14:50.629 ************************************ 00:14:50.629 09:40:27 blockdev_xnvme -- bdev/blockdev.sh@777 -- 
# run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:50.629 09:40:27 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:14:50.629 09:40:27 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:50.629 09:40:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:50.629 ************************************ 00:14:50.629 START TEST bdev_verify_big_io 00:14:50.629 ************************************ 00:14:50.629 09:40:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:50.629 [2024-07-24 09:40:27.828487] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:14:50.629 [2024-07-24 09:40:27.828614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86921 ] 00:14:50.629 [2024-07-24 09:40:27.997414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:50.629 [2024-07-24 09:40:28.042608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.629 [2024-07-24 09:40:28.042661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.629 Running I/O for 5 seconds... 00:14:57.195 00:14:57.195 Latency(us) 00:14:57.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.195 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:57.195 Verification LBA range: start 0x0 length 0xa000 00:14:57.195 nvme0n1 : 5.74 100.43 6.28 0.00 0.00 1232120.67 234139.86 2439097.27 00:14:57.195 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:57.195 Verification LBA range: start 0xa000 length 0xa000 00:14:57.195 nvme0n1 : 5.71 133.06 8.32 0.00 0.00 917291.43 23582.43 1367781.06 00:14:57.195 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:57.195 Verification LBA range: start 0x0 length 0xbd0b 00:14:57.195 nvme1n1 : 5.72 212.57 13.29 0.00 0.00 567884.75 8369.66 693997.29 00:14:57.195 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:57.195 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:57.195 nvme1n1 : 5.75 153.09 9.57 0.00 0.00 801612.95 35163.09 1448635.12 00:14:57.195 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:57.195 Verification LBA range: start 0x0 length 0x8000 00:14:57.195 nvme2n1 : 5.72 139.81 8.74 0.00 0.00 839875.16 67799.49 1374518.90 00:14:57.195 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:57.195 Verification LBA range: start 0x8000 length 0x8000 00:14:57.195 nvme2n1 : 5.74 142.17 8.89 0.00 0.00 837353.87 58534.97 1536227.01 00:14:57.195 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:57.195 Verification LBA range: start 0x0 length 0x8000 00:14:57.195 nvme2n2 : 5.73 161.27 10.08 0.00 0.00 725764.47 5053.38 1489062.14 00:14:57.195 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:57.195 Verification LBA range: start 0x8000 length 0x8000 00:14:57.195 nvme2n2 : 5.74 142.13 8.88 
0.00 0.00 814474.84 76221.79 1219548.63 00:14:57.195 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:57.195 Verification LBA range: start 0x0 length 0x8000 00:14:57.195 nvme2n3 : 5.73 173.09 10.82 0.00 0.00 654376.94 35794.76 859074.31 00:14:57.195 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:57.195 Verification LBA range: start 0x8000 length 0x8000 00:14:57.195 nvme2n3 : 5.75 174.43 10.90 0.00 0.00 648634.23 11949.13 1590129.71 00:14:57.195 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:57.195 Verification LBA range: start 0x0 length 0x2000 00:14:57.195 nvme3n1 : 5.74 203.54 12.72 0.00 0.00 545215.68 5106.02 1441897.28 00:14:57.195 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:57.195 Verification LBA range: start 0x2000 length 0x2000 00:14:57.195 nvme3n1 : 5.75 169.65 10.60 0.00 0.00 649826.20 2184.53 1030889.18 00:14:57.195 =================================================================================================================== 00:14:57.195 Total : 1905.23 119.08 0.00 0.00 737986.03 2184.53 2439097.27 00:14:57.195 00:14:57.195 real 0m6.547s 00:14:57.196 user 0m11.753s 00:14:57.196 sys 0m0.621s 00:14:57.196 ************************************ 00:14:57.196 END TEST bdev_verify_big_io 00:14:57.196 ************************************ 00:14:57.196 09:40:34 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:57.196 09:40:34 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.196 09:40:34 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:57.196 09:40:34 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:57.196 09:40:34 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:57.196 09:40:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:57.196 ************************************ 00:14:57.196 START TEST bdev_write_zeroes 00:14:57.196 ************************************ 00:14:57.196 09:40:34 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:57.196 [2024-07-24 09:40:34.445354] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:14:57.196 [2024-07-24 09:40:34.445493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87014 ] 00:14:57.196 [2024-07-24 09:40:34.601676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.196 [2024-07-24 09:40:34.643461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.196 Running I/O for 1 seconds... 
00:14:58.132 00:14:58.132 Latency(us) 00:14:58.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.132 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:58.132 nvme0n1 : 1.01 10932.04 42.70 0.00 0.00 11697.82 7632.71 25266.89 00:14:58.132 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:58.132 nvme1n1 : 1.02 12199.88 47.66 0.00 0.00 10463.49 5448.17 17792.10 00:14:58.132 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:58.132 nvme2n1 : 1.02 10943.67 42.75 0.00 0.00 11596.62 5158.66 24108.83 00:14:58.132 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:58.132 nvme2n2 : 1.02 10931.62 42.70 0.00 0.00 11603.69 5474.49 24529.94 00:14:58.132 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:58.132 nvme2n3 : 1.02 10920.18 42.66 0.00 0.00 11609.33 5737.69 24845.78 00:14:58.132 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:58.132 nvme3n1 : 1.02 10909.82 42.62 0.00 0.00 11613.04 5895.61 25161.61 00:14:58.133 =================================================================================================================== 00:14:58.133 Total : 66837.21 261.08 0.00 0.00 11412.39 5158.66 25266.89 00:14:58.391 00:14:58.391 real 0m1.740s 00:14:58.391 user 0m0.951s 00:14:58.391 sys 0m0.600s 00:14:58.391 09:40:36 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:58.391 09:40:36 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:58.391 ************************************ 00:14:58.391 END TEST bdev_write_zeroes 00:14:58.391 ************************************ 00:14:58.391 09:40:36 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:58.391 09:40:36 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:58.391 09:40:36 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:58.391 09:40:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:58.391 ************************************ 00:14:58.391 START TEST bdev_json_nonenclosed 00:14:58.391 ************************************ 00:14:58.391 09:40:36 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:58.650 [2024-07-24 09:40:36.241253] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:14:58.650 [2024-07-24 09:40:36.241395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87056 ] 00:14:58.650 [2024-07-24 09:40:36.408545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.650 [2024-07-24 09:40:36.451626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.650 [2024-07-24 09:40:36.451715] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:14:58.650 [2024-07-24 09:40:36.451745] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:58.650 [2024-07-24 09:40:36.451770] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:58.908 00:14:58.908 real 0m0.400s 00:14:58.908 user 0m0.163s 00:14:58.908 sys 0m0.134s 00:14:58.908 09:40:36 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:58.908 ************************************ 00:14:58.908 END TEST bdev_json_nonenclosed 00:14:58.908 ************************************ 00:14:58.908 09:40:36 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:58.908 09:40:36 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:58.908 09:40:36 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:58.908 09:40:36 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:58.908 09:40:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:58.908 ************************************ 00:14:58.908 START TEST bdev_json_nonarray 00:14:58.908 ************************************ 00:14:58.908 09:40:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:58.908 [2024-07-24 09:40:36.715839] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:14:58.908 [2024-07-24 09:40:36.715981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87076 ] 00:14:59.167 [2024-07-24 09:40:36.872657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.167 [2024-07-24 09:40:36.915618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.167 [2024-07-24 09:40:36.915716] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:14:59.167 [2024-07-24 09:40:36.915749] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:59.167 [2024-07-24 09:40:36.915761] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:59.425 00:14:59.425 real 0m0.390s 00:14:59.425 user 0m0.159s 00:14:59.425 sys 0m0.128s 00:14:59.425 09:40:37 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:59.425 09:40:37 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:59.425 ************************************ 00:14:59.425 END TEST bdev_json_nonarray 00:14:59.425 ************************************ 00:14:59.425 09:40:37 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:14:59.425 09:40:37 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:14:59.425 09:40:37 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:14:59.425 09:40:37 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:59.425 09:40:37 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:14:59.425 09:40:37 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:59.425 09:40:37 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:59.425 09:40:37 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:59.425 09:40:37 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:59.425 09:40:37 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:59.425 09:40:37 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:59.425 09:40:37 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:59.993 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:14.897 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:14.897 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:14.897 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:14.897 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:14.897 00:15:14.897 real 1m1.028s 00:15:14.897 user 1m24.074s 00:15:14.897 sys 0m41.637s 00:15:14.897 09:40:51 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:14.897 09:40:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:14.897 ************************************ 00:15:14.897 END TEST blockdev_xnvme 00:15:14.897 ************************************ 00:15:14.897 09:40:51 -- spdk/autotest.sh@255 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:14.897 09:40:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:14.897 09:40:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:14.897 09:40:51 -- common/autotest_common.sh@10 -- # set +x 00:15:14.897 ************************************ 00:15:14.897 START TEST ublk 00:15:14.897 ************************************ 00:15:14.897 09:40:51 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:14.897 * Looking for test storage... 
00:15:14.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:14.897 09:40:51 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:14.897 09:40:51 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:14.897 09:40:51 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:14.897 09:40:51 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:14.897 09:40:51 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:14.897 09:40:51 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:14.897 09:40:51 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:14.897 09:40:51 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:14.897 09:40:51 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:14.897 09:40:51 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:14.897 09:40:51 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:14.897 09:40:51 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:14.897 09:40:51 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:14.897 09:40:51 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:14.897 09:40:51 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:14.897 09:40:51 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:14.897 09:40:51 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:14.897 09:40:51 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:14.897 09:40:51 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:14.897 09:40:51 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:14.897 09:40:51 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:14.897 09:40:51 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:14.897 09:40:51 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:14.897 ************************************ 00:15:14.897 START TEST test_save_ublk_config 00:15:14.897 ************************************ 00:15:14.897 09:40:51 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:15:14.897 09:40:51 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:14.897 09:40:51 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=87380 00:15:14.897 09:40:51 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:14.897 09:40:51 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 87380 00:15:14.897 09:40:51 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 87380 ']' 00:15:14.897 09:40:51 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.897 09:40:51 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:14.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.897 09:40:51 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.897 09:40:51 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:14.897 09:40:51 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:14.897 09:40:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:14.897 [2024-07-24 09:40:51.959685] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:15:14.897 [2024-07-24 09:40:51.959810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87380 ] 00:15:14.897 [2024-07-24 09:40:52.129054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.897 [2024-07-24 09:40:52.183463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.156 09:40:52 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:15.156 09:40:52 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:15:15.156 09:40:52 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:15.156 09:40:52 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:15.156 09:40:52 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.156 09:40:52 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:15.156 [2024-07-24 09:40:52.825212] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:15.156 [2024-07-24 09:40:52.825509] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:15.156 malloc0 00:15:15.156 [2024-07-24 09:40:52.857329] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:15.156 [2024-07-24 09:40:52.857441] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:15.156 [2024-07-24 09:40:52.857457] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:15.156 [2024-07-24 09:40:52.857466] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:15.156 [2024-07-24 09:40:52.866301] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:15.156 [2024-07-24 09:40:52.866331] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:15.156 [2024-07-24 09:40:52.873227] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:15.156 [2024-07-24 09:40:52.873330] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:15.156 [2024-07-24 09:40:52.890229] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:15.156 0 00:15:15.156 09:40:52 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.156 09:40:52 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:15.156 09:40:52 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.156 09:40:52 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:15.414 09:40:53 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.414 09:40:53 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:15.414 "subsystems": [ 00:15:15.414 { 00:15:15.414 "subsystem": "keyring", 00:15:15.414 "config": [] 00:15:15.414 }, 00:15:15.414 { 00:15:15.414 "subsystem": "iobuf", 00:15:15.414 "config": [ 00:15:15.414 { 00:15:15.414 "method": "iobuf_set_options", 00:15:15.414 "params": { 00:15:15.414 "small_pool_count": 8192, 00:15:15.414 "large_pool_count": 1024, 00:15:15.414 "small_bufsize": 8192, 00:15:15.414 "large_bufsize": 135168 00:15:15.414 } 00:15:15.414 } 00:15:15.414 ] 00:15:15.414 }, 00:15:15.414 { 
00:15:15.414 "subsystem": "sock", 00:15:15.414 "config": [ 00:15:15.414 { 00:15:15.414 "method": "sock_set_default_impl", 00:15:15.414 "params": { 00:15:15.414 "impl_name": "posix" 00:15:15.414 } 00:15:15.414 }, 00:15:15.414 { 00:15:15.414 "method": "sock_impl_set_options", 00:15:15.414 "params": { 00:15:15.414 "impl_name": "ssl", 00:15:15.414 "recv_buf_size": 4096, 00:15:15.414 "send_buf_size": 4096, 00:15:15.414 "enable_recv_pipe": true, 00:15:15.414 "enable_quickack": false, 00:15:15.414 "enable_placement_id": 0, 00:15:15.414 "enable_zerocopy_send_server": true, 00:15:15.414 "enable_zerocopy_send_client": false, 00:15:15.414 "zerocopy_threshold": 0, 00:15:15.414 "tls_version": 0, 00:15:15.414 "enable_ktls": false 00:15:15.414 } 00:15:15.414 }, 00:15:15.414 { 00:15:15.414 "method": "sock_impl_set_options", 00:15:15.414 "params": { 00:15:15.414 "impl_name": "posix", 00:15:15.414 "recv_buf_size": 2097152, 00:15:15.414 "send_buf_size": 2097152, 00:15:15.414 "enable_recv_pipe": true, 00:15:15.414 "enable_quickack": false, 00:15:15.414 "enable_placement_id": 0, 00:15:15.414 "enable_zerocopy_send_server": true, 00:15:15.414 "enable_zerocopy_send_client": false, 00:15:15.414 "zerocopy_threshold": 0, 00:15:15.415 "tls_version": 0, 00:15:15.415 "enable_ktls": false 00:15:15.415 } 00:15:15.415 } 00:15:15.415 ] 00:15:15.415 }, 00:15:15.415 { 00:15:15.415 "subsystem": "vmd", 00:15:15.415 "config": [] 00:15:15.415 }, 00:15:15.415 { 00:15:15.415 "subsystem": "accel", 00:15:15.415 "config": [ 00:15:15.415 { 00:15:15.415 "method": "accel_set_options", 00:15:15.415 "params": { 00:15:15.415 "small_cache_size": 128, 00:15:15.415 "large_cache_size": 16, 00:15:15.415 "task_count": 2048, 00:15:15.415 "sequence_count": 2048, 00:15:15.415 "buf_count": 2048 00:15:15.415 } 00:15:15.415 } 00:15:15.415 ] 00:15:15.415 }, 00:15:15.415 { 00:15:15.415 "subsystem": "bdev", 00:15:15.415 "config": [ 00:15:15.415 { 00:15:15.415 "method": "bdev_set_options", 00:15:15.415 "params": { 00:15:15.415 "bdev_io_pool_size": 65535, 00:15:15.415 "bdev_io_cache_size": 256, 00:15:15.415 "bdev_auto_examine": true, 00:15:15.415 "iobuf_small_cache_size": 128, 00:15:15.415 "iobuf_large_cache_size": 16 00:15:15.415 } 00:15:15.415 }, 00:15:15.415 { 00:15:15.415 "method": "bdev_raid_set_options", 00:15:15.415 "params": { 00:15:15.415 "process_window_size_kb": 1024, 00:15:15.415 "process_max_bandwidth_mb_sec": 0 00:15:15.415 } 00:15:15.415 }, 00:15:15.415 { 00:15:15.415 "method": "bdev_iscsi_set_options", 00:15:15.415 "params": { 00:15:15.415 "timeout_sec": 30 00:15:15.415 } 00:15:15.415 }, 00:15:15.415 { 00:15:15.415 "method": "bdev_nvme_set_options", 00:15:15.415 "params": { 00:15:15.415 "action_on_timeout": "none", 00:15:15.415 "timeout_us": 0, 00:15:15.415 "timeout_admin_us": 0, 00:15:15.415 "keep_alive_timeout_ms": 10000, 00:15:15.415 "arbitration_burst": 0, 00:15:15.415 "low_priority_weight": 0, 00:15:15.415 "medium_priority_weight": 0, 00:15:15.415 "high_priority_weight": 0, 00:15:15.415 "nvme_adminq_poll_period_us": 10000, 00:15:15.415 "nvme_ioq_poll_period_us": 0, 00:15:15.415 "io_queue_requests": 0, 00:15:15.415 "delay_cmd_submit": true, 00:15:15.415 "transport_retry_count": 4, 00:15:15.415 "bdev_retry_count": 3, 00:15:15.415 "transport_ack_timeout": 0, 00:15:15.415 "ctrlr_loss_timeout_sec": 0, 00:15:15.415 "reconnect_delay_sec": 0, 00:15:15.415 "fast_io_fail_timeout_sec": 0, 00:15:15.415 "disable_auto_failback": false, 00:15:15.415 "generate_uuids": false, 00:15:15.415 "transport_tos": 0, 00:15:15.415 "nvme_error_stat": false, 
00:15:15.415 "rdma_srq_size": 0, 00:15:15.415 "io_path_stat": false, 00:15:15.415 "allow_accel_sequence": false, 00:15:15.415 "rdma_max_cq_size": 0, 00:15:15.415 "rdma_cm_event_timeout_ms": 0, 00:15:15.415 "dhchap_digests": [ 00:15:15.415 "sha256", 00:15:15.415 "sha384", 00:15:15.415 "sha512" 00:15:15.415 ], 00:15:15.415 "dhchap_dhgroups": [ 00:15:15.415 "null", 00:15:15.415 "ffdhe2048", 00:15:15.415 "ffdhe3072", 00:15:15.415 "ffdhe4096", 00:15:15.415 "ffdhe6144", 00:15:15.415 "ffdhe8192" 00:15:15.415 ] 00:15:15.415 } 00:15:15.415 }, 00:15:15.415 { 00:15:15.415 "method": "bdev_nvme_set_hotplug", 00:15:15.415 "params": { 00:15:15.415 "period_us": 100000, 00:15:15.415 "enable": false 00:15:15.415 } 00:15:15.415 }, 00:15:15.415 { 00:15:15.415 "method": "bdev_malloc_create", 00:15:15.415 "params": { 00:15:15.415 "name": "malloc0", 00:15:15.415 "num_blocks": 8192, 00:15:15.415 "block_size": 4096, 00:15:15.415 "physical_block_size": 4096, 00:15:15.415 "uuid": "ac6d536d-3b7e-4e40-b5fe-a3bed998478e", 00:15:15.415 "optimal_io_boundary": 0, 00:15:15.415 "md_size": 0, 00:15:15.415 "dif_type": 0, 00:15:15.415 "dif_is_head_of_md": false, 00:15:15.415 "dif_pi_format": 0 00:15:15.415 } 00:15:15.415 }, 00:15:15.415 { 00:15:15.415 "method": "bdev_wait_for_examine" 00:15:15.415 } 00:15:15.415 ] 00:15:15.415 }, 00:15:15.415 { 00:15:15.415 "subsystem": "scsi", 00:15:15.415 "config": null 00:15:15.415 }, 00:15:15.415 { 00:15:15.415 "subsystem": "scheduler", 00:15:15.415 "config": [ 00:15:15.415 { 00:15:15.415 "method": "framework_set_scheduler", 00:15:15.415 "params": { 00:15:15.415 "name": "static" 00:15:15.415 } 00:15:15.415 } 00:15:15.415 ] 00:15:15.415 }, 00:15:15.415 { 00:15:15.415 "subsystem": "vhost_scsi", 00:15:15.415 "config": [] 00:15:15.415 }, 00:15:15.415 { 00:15:15.415 "subsystem": "vhost_blk", 00:15:15.415 "config": [] 00:15:15.415 }, 00:15:15.415 { 00:15:15.415 "subsystem": "ublk", 00:15:15.415 "config": [ 00:15:15.415 { 00:15:15.415 "method": "ublk_create_target", 00:15:15.415 "params": { 00:15:15.415 "cpumask": "1" 00:15:15.415 } 00:15:15.415 }, 00:15:15.415 { 00:15:15.415 "method": "ublk_start_disk", 00:15:15.415 "params": { 00:15:15.415 "bdev_name": "malloc0", 00:15:15.415 "ublk_id": 0, 00:15:15.415 "num_queues": 1, 00:15:15.415 "queue_depth": 128 00:15:15.415 } 00:15:15.415 } 00:15:15.415 ] 00:15:15.415 }, 00:15:15.415 { 00:15:15.415 "subsystem": "nbd", 00:15:15.415 "config": [] 00:15:15.415 }, 00:15:15.415 { 00:15:15.415 "subsystem": "nvmf", 00:15:15.415 "config": [ 00:15:15.415 { 00:15:15.415 "method": "nvmf_set_config", 00:15:15.415 "params": { 00:15:15.415 "discovery_filter": "match_any", 00:15:15.415 "admin_cmd_passthru": { 00:15:15.415 "identify_ctrlr": false 00:15:15.415 } 00:15:15.415 } 00:15:15.415 }, 00:15:15.415 { 00:15:15.415 "method": "nvmf_set_max_subsystems", 00:15:15.415 "params": { 00:15:15.415 "max_subsystems": 1024 00:15:15.415 } 00:15:15.415 }, 00:15:15.415 { 00:15:15.415 "method": "nvmf_set_crdt", 00:15:15.415 "params": { 00:15:15.415 "crdt1": 0, 00:15:15.415 "crdt2": 0, 00:15:15.415 "crdt3": 0 00:15:15.415 } 00:15:15.415 } 00:15:15.415 ] 00:15:15.415 }, 00:15:15.415 { 00:15:15.415 "subsystem": "iscsi", 00:15:15.415 "config": [ 00:15:15.415 { 00:15:15.415 "method": "iscsi_set_options", 00:15:15.415 "params": { 00:15:15.415 "node_base": "iqn.2016-06.io.spdk", 00:15:15.415 "max_sessions": 128, 00:15:15.415 "max_connections_per_session": 2, 00:15:15.415 "max_queue_depth": 64, 00:15:15.415 "default_time2wait": 2, 00:15:15.415 "default_time2retain": 20, 00:15:15.415 
"first_burst_length": 8192, 00:15:15.415 "immediate_data": true, 00:15:15.415 "allow_duplicated_isid": false, 00:15:15.415 "error_recovery_level": 0, 00:15:15.415 "nop_timeout": 60, 00:15:15.415 "nop_in_interval": 30, 00:15:15.415 "disable_chap": false, 00:15:15.415 "require_chap": false, 00:15:15.415 "mutual_chap": false, 00:15:15.415 "chap_group": 0, 00:15:15.415 "max_large_datain_per_connection": 64, 00:15:15.415 "max_r2t_per_connection": 4, 00:15:15.415 "pdu_pool_size": 36864, 00:15:15.415 "immediate_data_pool_size": 16384, 00:15:15.415 "data_out_pool_size": 2048 00:15:15.415 } 00:15:15.415 } 00:15:15.415 ] 00:15:15.415 } 00:15:15.415 ] 00:15:15.415 }' 00:15:15.415 09:40:53 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 87380 00:15:15.415 09:40:53 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 87380 ']' 00:15:15.415 09:40:53 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 87380 00:15:15.415 09:40:53 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:15:15.415 09:40:53 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:15.415 09:40:53 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87380 00:15:15.415 09:40:53 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:15.416 09:40:53 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:15.416 killing process with pid 87380 00:15:15.416 09:40:53 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87380' 00:15:15.416 09:40:53 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 87380 00:15:15.416 09:40:53 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 87380 00:15:15.714 [2024-07-24 09:40:53.506975] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:15.988 [2024-07-24 09:40:53.543299] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:15.988 [2024-07-24 09:40:53.543447] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:15.988 [2024-07-24 09:40:53.551260] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:15.988 [2024-07-24 09:40:53.551312] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:15.988 [2024-07-24 09:40:53.551324] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:15.988 [2024-07-24 09:40:53.551355] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:15:15.988 [2024-07-24 09:40:53.551504] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:15:16.246 09:40:53 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=87418 00:15:16.246 09:40:53 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 87418 00:15:16.246 09:40:53 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:16.246 "subsystems": [ 00:15:16.246 { 00:15:16.246 "subsystem": "keyring", 00:15:16.246 "config": [] 00:15:16.246 }, 00:15:16.246 { 00:15:16.246 "subsystem": "iobuf", 00:15:16.246 "config": [ 00:15:16.246 { 00:15:16.246 "method": "iobuf_set_options", 00:15:16.246 "params": { 00:15:16.246 "small_pool_count": 8192, 00:15:16.246 "large_pool_count": 1024, 00:15:16.246 "small_bufsize": 8192, 00:15:16.246 "large_bufsize": 135168 00:15:16.246 } 00:15:16.246 } 00:15:16.246 ] 00:15:16.246 }, 00:15:16.246 { 00:15:16.246 "subsystem": "sock", 
00:15:16.246 "config": [ 00:15:16.246 { 00:15:16.246 "method": "sock_set_default_impl", 00:15:16.246 "params": { 00:15:16.246 "impl_name": "posix" 00:15:16.246 } 00:15:16.246 }, 00:15:16.246 { 00:15:16.246 "method": "sock_impl_set_options", 00:15:16.246 "params": { 00:15:16.246 "impl_name": "ssl", 00:15:16.246 "recv_buf_size": 4096, 00:15:16.246 "send_buf_size": 4096, 00:15:16.246 "enable_recv_pipe": true, 00:15:16.246 "enable_quickack": false, 00:15:16.246 "enable_placement_id": 0, 00:15:16.246 "enable_zerocopy_send_server": true, 00:15:16.246 "enable_zerocopy_send_client": false, 00:15:16.246 "zerocopy_threshold": 0, 00:15:16.246 "tls_version": 0, 00:15:16.246 "enable_ktls": false 00:15:16.246 } 00:15:16.246 }, 00:15:16.246 { 00:15:16.246 "method": "sock_impl_set_options", 00:15:16.246 "params": { 00:15:16.246 "impl_name": "posix", 00:15:16.246 "recv_buf_size": 2097152, 00:15:16.246 "send_buf_size": 2097152, 00:15:16.246 "enable_recv_pipe": true, 00:15:16.246 "enable_quickack": false, 00:15:16.246 "enable_placement_id": 0, 00:15:16.246 "enable_zerocopy_send_server": true, 00:15:16.246 "enable_zerocopy_send_client": false, 00:15:16.246 "zerocopy_threshold": 0, 00:15:16.246 "tls_version": 0, 00:15:16.247 "enable_ktls": false 00:15:16.247 } 00:15:16.247 } 00:15:16.247 ] 00:15:16.247 }, 00:15:16.247 { 00:15:16.247 "subsystem": "vmd", 00:15:16.247 "config": [] 00:15:16.247 }, 00:15:16.247 { 00:15:16.247 "subsystem": "accel", 00:15:16.247 "config": [ 00:15:16.247 { 00:15:16.247 "method": "accel_set_options", 00:15:16.247 "params": { 00:15:16.247 "small_cache_size": 128, 00:15:16.247 "large_cache_size": 16, 00:15:16.247 "task_count": 2048, 00:15:16.247 "sequence_count": 2048, 00:15:16.247 "buf_count": 2048 00:15:16.247 } 00:15:16.247 } 00:15:16.247 ] 00:15:16.247 }, 00:15:16.247 { 00:15:16.247 "subsystem": "bdev", 00:15:16.247 "config": [ 00:15:16.247 { 00:15:16.247 "method": "bdev_set_options", 00:15:16.247 "params": { 00:15:16.247 "bdev_io_pool_size": 65535, 00:15:16.247 "bdev_io_cache_size": 256, 00:15:16.247 "bdev_auto_examine": true, 00:15:16.247 "iobuf_small_cache_size": 128, 00:15:16.247 "iobuf_large_cache_size": 16 00:15:16.247 } 00:15:16.247 }, 00:15:16.247 { 00:15:16.247 "method": "bdev_raid_set_options", 00:15:16.247 "params": { 00:15:16.247 "process_window_size_kb": 1024, 00:15:16.247 "process_max_bandwidth_mb_sec": 0 00:15:16.247 } 00:15:16.247 }, 00:15:16.247 { 00:15:16.247 "method": "bdev_iscsi_set_options", 00:15:16.247 "params": { 00:15:16.247 "timeout_sec": 30 00:15:16.247 } 00:15:16.247 }, 00:15:16.247 { 00:15:16.247 "method": "bdev_nvme_set_options", 00:15:16.247 "params": { 00:15:16.247 "action_on_timeout": "none", 00:15:16.247 "timeout_us": 0, 00:15:16.247 "timeout_admin_us": 0, 00:15:16.247 "keep_alive_timeout_ms": 10000, 00:15:16.247 "arbitration_burst": 0, 00:15:16.247 "low_priority_weight": 0, 00:15:16.247 "medium_priority_weight": 0, 00:15:16.247 "high_priority_weight": 0, 00:15:16.247 "nvme_adminq_poll_period_us": 10000, 00:15:16.247 "nvme_ioq_poll_period_us": 0, 00:15:16.247 "io_queue_requests": 0, 00:15:16.247 "delay_cmd_submit": true, 00:15:16.247 "transport_retry_count": 4, 00:15:16.247 "bdev_retry_count": 3, 00:15:16.247 "transport_ack_timeout": 0, 00:15:16.247 "ctrlr_loss_timeout_sec": 0, 00:15:16.247 "reconnect_delay_sec": 0, 00:15:16.247 "fast_io_fail_timeout_sec": 0, 00:15:16.247 "disable_auto_failback": false, 00:15:16.247 "generate_uuids": false, 00:15:16.247 "transport_tos": 0, 00:15:16.247 "nvme_error_stat": false, 00:15:16.247 "rdma_srq_size": 0, 
00:15:16.247 "io_path_stat": false, 00:15:16.247 "allow_accel_sequence": false, 00:15:16.247 "rdma_max_cq_size": 0, 00:15:16.247 "rdma_cm_event_timeout_ms": 0, 00:15:16.247 "dhchap_digests": [ 00:15:16.247 "sha256", 00:15:16.247 "sha384", 00:15:16.247 "sha512" 00:15:16.247 ], 00:15:16.247 "dhchap_dhgroups": [ 00:15:16.247 "null", 00:15:16.247 "ffdhe2048", 00:15:16.247 "ffdhe3072", 00:15:16.247 "ffdhe4096", 00:15:16.247 "ffdhe6144", 00:15:16.247 "ffdhe8192" 00:15:16.247 ] 00:15:16.247 } 00:15:16.247 }, 00:15:16.247 { 00:15:16.247 "method": "bdev_nvme_set_hotplug", 00:15:16.247 "params": { 00:15:16.247 "period_us": 100000, 00:15:16.247 "enable": false 00:15:16.247 } 00:15:16.247 }, 00:15:16.247 { 00:15:16.247 "method": "bdev_malloc_create", 00:15:16.247 "params": { 00:15:16.247 "name": "malloc0", 00:15:16.247 "num_blocks": 8192, 00:15:16.247 "block_size": 4096, 00:15:16.247 "physical_block_size": 4096, 00:15:16.247 "uuid": "ac6d536d-3b7e-4e40-b5fe-a3bed998478e", 00:15:16.247 "optimal_io_boundary": 0, 00:15:16.247 "md_size": 0, 00:15:16.247 "dif_type": 0, 00:15:16.247 "dif_is_head_of_md": false, 00:15:16.247 "dif_pi_format": 0 00:15:16.247 } 00:15:16.247 }, 00:15:16.247 { 00:15:16.247 "method": "bdev_wait_for_examine" 00:15:16.247 } 00:15:16.247 ] 00:15:16.247 }, 00:15:16.247 { 00:15:16.247 "subsystem": "scsi", 00:15:16.247 "config": null 00:15:16.247 }, 00:15:16.247 { 00:15:16.247 "subsystem": "scheduler", 00:15:16.247 "config": [ 00:15:16.247 { 00:15:16.247 "method": "framework_set_scheduler", 00:15:16.247 "params": { 00:15:16.247 "name": "static" 00:15:16.247 } 00:15:16.247 } 00:15:16.247 ] 00:15:16.247 }, 00:15:16.247 { 00:15:16.247 "subsystem": "vhost_scsi", 00:15:16.247 "config": [] 00:15:16.247 }, 00:15:16.247 { 00:15:16.247 "subsystem": "vhost_blk", 00:15:16.247 "config": [] 00:15:16.247 }, 00:15:16.247 { 00:15:16.247 "subsystem": "ublk", 00:15:16.247 "config": [ 00:15:16.247 { 00:15:16.247 "method": "ublk_create_target", 00:15:16.247 "params": { 00:15:16.247 "cpumask": "1" 00:15:16.247 } 00:15:16.247 }, 00:15:16.247 { 00:15:16.247 "method": "ublk_start_disk", 00:15:16.247 "params": { 00:15:16.247 "bdev_name": "malloc0", 00:15:16.247 "ublk_id": 0, 00:15:16.247 "num_queues": 1, 00:15:16.247 "queue_depth": 128 00:15:16.247 } 00:15:16.247 } 00:15:16.247 ] 00:15:16.247 }, 00:15:16.247 { 00:15:16.247 "subsystem": "nbd", 00:15:16.247 "config": [] 00:15:16.247 }, 00:15:16.247 { 00:15:16.247 "subsystem": "nvmf", 00:15:16.247 "config": [ 00:15:16.247 { 00:15:16.247 "method": "nvmf_set_config", 00:15:16.247 "params": { 00:15:16.247 "discovery_filter": "match_any", 00:15:16.247 "admin_cmd_passthru": { 00:15:16.247 "identify_ctrlr": false 00:15:16.247 } 00:15:16.247 } 00:15:16.247 }, 00:15:16.247 { 00:15:16.247 "method": "nvmf_set_max_subsystems", 00:15:16.247 "params": { 00:15:16.247 "max_subsystems": 1024 00:15:16.247 } 00:15:16.247 }, 00:15:16.247 { 00:15:16.247 "method": "nvmf_set_crdt", 00:15:16.247 "params": { 00:15:16.247 "crdt1": 0, 00:15:16.247 "crdt2": 0, 00:15:16.247 "crdt3": 0 00:15:16.247 } 00:15:16.247 } 00:15:16.247 ] 00:15:16.247 }, 00:15:16.247 { 00:15:16.247 "subsystem": "iscsi", 00:15:16.247 "config": [ 00:15:16.247 { 00:15:16.247 "method": "iscsi_set_options", 00:15:16.247 "params": { 00:15:16.247 "node_base": "iqn.2016-06.io.spdk", 00:15:16.247 "max_sessions": 128, 00:15:16.247 09:40:53 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 87418 ']' 00:15:16.247 "max_connections_per_session": 2, 00:15:16.247 "max_queue_depth": 64, 00:15:16.247 
"default_time2wait": 2, 00:15:16.247 "default_time2retain": 20, 00:15:16.247 "first_burst_length": 8192, 00:15:16.247 "immediate_data": true, 00:15:16.247 "allow_duplicated_isid": false, 00:15:16.247 "error_recovery_level": 0, 00:15:16.247 "nop_timeout": 60, 00:15:16.247 "nop_in_interval": 30, 00:15:16.247 "disable_chap": false, 00:15:16.247 "require_chap": false, 00:15:16.247 "mutual_chap": false, 00:15:16.247 "chap_group": 0, 00:15:16.247 "max_large_datain_per_connection": 64, 00:15:16.247 "max_r2t_per_connection": 4, 00:15:16.247 "pdu_pool_size": 36864, 00:15:16.247 "immediate_data_pool_size": 16384, 00:15:16.247 "data_out_pool_size": 2048 00:15:16.247 } 00:15:16.247 } 00:15:16.247 ] 00:15:16.247 } 00:15:16.247 ] 00:15:16.247 }' 00:15:16.247 09:40:53 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:16.247 09:40:53 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.247 09:40:53 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:16.247 09:40:53 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.247 09:40:53 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:16.247 09:40:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:16.247 [2024-07-24 09:40:53.917901] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:15:16.247 [2024-07-24 09:40:53.918044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87418 ] 00:15:16.505 [2024-07-24 09:40:54.085138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.505 [2024-07-24 09:40:54.140665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.763 [2024-07-24 09:40:54.478206] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:16.763 [2024-07-24 09:40:54.478510] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:16.763 [2024-07-24 09:40:54.486333] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:16.763 [2024-07-24 09:40:54.486426] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:16.763 [2024-07-24 09:40:54.486441] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:16.763 [2024-07-24 09:40:54.486449] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:16.763 [2024-07-24 09:40:54.495273] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:16.763 [2024-07-24 09:40:54.495300] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:16.763 [2024-07-24 09:40:54.502242] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:16.763 [2024-07-24 09:40:54.502339] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:16.763 [2024-07-24 09:40:54.519207] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_START_DEV completed 00:15:17.021 09:40:54 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:17.021 09:40:54 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:15:17.021 09:40:54 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:17.021 09:40:54 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.021 09:40:54 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:17.021 09:40:54 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:17.021 09:40:54 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.021 09:40:54 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:17.021 09:40:54 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:17.021 09:40:54 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 87418 00:15:17.021 09:40:54 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 87418 ']' 00:15:17.021 09:40:54 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 87418 00:15:17.021 09:40:54 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:15:17.021 09:40:54 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:17.021 09:40:54 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87418 00:15:17.021 09:40:54 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:17.021 09:40:54 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:17.021 09:40:54 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87418' 00:15:17.021 killing process with pid 87418 00:15:17.021 09:40:54 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 87418 00:15:17.021 09:40:54 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 87418 00:15:17.586 [2024-07-24 09:40:55.117060] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:17.586 [2024-07-24 09:40:55.155236] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:17.586 [2024-07-24 09:40:55.155397] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:17.586 [2024-07-24 09:40:55.163225] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:17.586 [2024-07-24 09:40:55.163282] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:17.586 [2024-07-24 09:40:55.163291] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:17.586 [2024-07-24 09:40:55.163319] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:15:17.586 [2024-07-24 09:40:55.163470] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:15:17.844 09:40:55 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:17.844 00:15:17.844 real 0m3.568s 00:15:17.844 user 0m2.714s 00:15:17.844 sys 0m1.606s 00:15:17.844 09:40:55 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:17.844 09:40:55 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:17.844 ************************************ 00:15:17.844 END TEST test_save_ublk_config 00:15:17.844 ************************************ 00:15:17.844 09:40:55 
ublk -- ublk/ublk.sh@139 -- # spdk_pid=87463 00:15:17.844 09:40:55 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:17.844 09:40:55 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:17.844 09:40:55 ublk -- ublk/ublk.sh@141 -- # waitforlisten 87463 00:15:17.844 09:40:55 ublk -- common/autotest_common.sh@831 -- # '[' -z 87463 ']' 00:15:17.844 09:40:55 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.844 09:40:55 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:17.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.844 09:40:55 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.844 09:40:55 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:17.844 09:40:55 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.844 [2024-07-24 09:40:55.583519] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:15:17.844 [2024-07-24 09:40:55.583660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87463 ] 00:15:18.102 [2024-07-24 09:40:55.749270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:18.102 [2024-07-24 09:40:55.795078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.102 [2024-07-24 09:40:55.795179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.668 09:40:56 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:18.668 09:40:56 ublk -- common/autotest_common.sh@864 -- # return 0 00:15:18.668 09:40:56 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:18.668 09:40:56 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:18.668 09:40:56 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:18.668 09:40:56 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:18.668 ************************************ 00:15:18.668 START TEST test_create_ublk 00:15:18.668 ************************************ 00:15:18.668 09:40:56 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:15:18.668 09:40:56 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:18.668 09:40:56 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.668 09:40:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:18.668 [2024-07-24 09:40:56.385211] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:18.668 [2024-07-24 09:40:56.386583] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:18.668 09:40:56 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.668 09:40:56 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:18.668 09:40:56 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:18.668 09:40:56 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.668 09:40:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:18.668 09:40:56 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.668 
09:40:56 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:18.668 09:40:56 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:18.668 09:40:56 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.668 09:40:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:18.668 [2024-07-24 09:40:56.465368] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:18.668 [2024-07-24 09:40:56.465809] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:18.668 [2024-07-24 09:40:56.465830] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:18.668 [2024-07-24 09:40:56.465839] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:18.668 [2024-07-24 09:40:56.473518] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:18.668 [2024-07-24 09:40:56.473544] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:18.668 [2024-07-24 09:40:56.481250] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:18.926 [2024-07-24 09:40:56.490273] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:18.926 [2024-07-24 09:40:56.506235] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:18.926 09:40:56 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.926 09:40:56 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:18.926 09:40:56 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:18.926 09:40:56 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:18.926 09:40:56 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.926 09:40:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:18.926 09:40:56 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.926 09:40:56 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:18.926 { 00:15:18.926 "ublk_device": "/dev/ublkb0", 00:15:18.926 "id": 0, 00:15:18.926 "queue_depth": 512, 00:15:18.926 "num_queues": 4, 00:15:18.926 "bdev_name": "Malloc0" 00:15:18.926 } 00:15:18.926 ]' 00:15:18.926 09:40:56 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:18.926 09:40:56 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:18.926 09:40:56 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:18.926 09:40:56 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:18.926 09:40:56 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:18.926 09:40:56 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:18.926 09:40:56 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:18.926 09:40:56 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:18.926 09:40:56 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:19.184 09:40:56 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:19.184 09:40:56 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:19.184 09:40:56 ublk.test_create_ublk -- lvol/common.sh@40 -- # local 
file=/dev/ublkb0 00:15:19.184 09:40:56 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:19.184 09:40:56 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:19.184 09:40:56 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:19.184 09:40:56 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:19.184 09:40:56 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:19.184 09:40:56 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:19.184 09:40:56 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:19.184 09:40:56 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:19.184 09:40:56 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:19.184 09:40:56 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:19.184 fio: verification read phase will never start because write phase uses all of runtime 00:15:19.184 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:19.184 fio-3.35 00:15:19.184 Starting 1 process 00:15:31.380 00:15:31.380 fio_test: (groupid=0, jobs=1): err= 0: pid=87508: Wed Jul 24 09:41:07 2024 00:15:31.380 write: IOPS=16.7k, BW=65.4MiB/s (68.6MB/s)(654MiB/10001msec); 0 zone resets 00:15:31.380 clat (usec): min=37, max=5576, avg=58.95, stdev=99.18 00:15:31.380 lat (usec): min=37, max=5577, avg=59.36, stdev=99.19 00:15:31.380 clat percentiles (usec): 00:15:31.380 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 52], 20.00th=[ 53], 00:15:31.380 | 30.00th=[ 55], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 56], 00:15:31.380 | 70.00th=[ 57], 80.00th=[ 58], 90.00th=[ 60], 95.00th=[ 62], 00:15:31.380 | 99.00th=[ 69], 99.50th=[ 74], 99.90th=[ 1975], 99.95th=[ 2835], 00:15:31.380 | 99.99th=[ 3654] 00:15:31.380 bw ( KiB/s): min=65144, max=74778, per=100.00%, avg=67097.79, stdev=2559.94, samples=19 00:15:31.380 iops : min=16286, max=18694, avg=16774.42, stdev=639.90, samples=19 00:15:31.380 lat (usec) : 50=5.39%, 100=94.39%, 250=0.02%, 500=0.01%, 750=0.01% 00:15:31.380 lat (usec) : 1000=0.02% 00:15:31.380 lat (msec) : 2=0.06%, 4=0.10%, 10=0.01% 00:15:31.380 cpu : usr=2.84%, sys=10.79%, ctx=167429, majf=0, minf=796 00:15:31.380 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:31.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.380 issued rwts: total=0,167427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:31.380 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:31.380 00:15:31.380 Run status group 0 (all jobs): 00:15:31.380 WRITE: bw=65.4MiB/s (68.6MB/s), 65.4MiB/s-65.4MiB/s (68.6MB/s-68.6MB/s), io=654MiB (686MB), run=10001-10001msec 00:15:31.380 00:15:31.380 Disk stats (read/write): 00:15:31.380 ublkb0: ios=0/165719, merge=0/0, ticks=0/8623, in_queue=8624, util=99.08% 00:15:31.380 09:41:07 ublk.test_create_ublk -- 
ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:15:31.380 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.380 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.380 [2024-07-24 09:41:07.029466] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:31.380 [2024-07-24 09:41:07.065253] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:31.380 [2024-07-24 09:41:07.066171] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:31.380 [2024-07-24 09:41:07.069232] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:31.380 [2024-07-24 09:41:07.069531] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:31.380 [2024-07-24 09:41:07.069549] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:31.380 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.380 09:41:07 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:15:31.380 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:15:31.380 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:31.380 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:31.380 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.380 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:31.380 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.380 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:15:31.380 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.380 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.380 [2024-07-24 09:41:07.078524] ublk.c:1053:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:31.380 request: 00:15:31.380 { 00:15:31.380 "ublk_id": 0, 00:15:31.380 "method": "ublk_stop_disk", 00:15:31.380 "req_id": 1 00:15:31.380 } 00:15:31.380 Got JSON-RPC error response 00:15:31.380 response: 00:15:31.380 { 00:15:31.380 "code": -19, 00:15:31.380 "message": "No such device" 00:15:31.380 } 00:15:31.380 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:31.380 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:15:31.381 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:31.381 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:31.381 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:31.381 09:41:07 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:31.381 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.381 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.381 [2024-07-24 09:41:07.095308] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:15:31.381 [2024-07-24 09:41:07.097627] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:15:31.381 [2024-07-24 09:41:07.097672] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:31.381 09:41:07 
ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.381 09:41:07 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:31.381 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.381 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.381 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.381 09:41:07 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:31.381 09:41:07 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:31.381 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.381 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.381 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.381 09:41:07 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:31.381 09:41:07 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:15:31.381 09:41:07 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:31.381 09:41:07 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:31.381 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.381 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.381 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.381 09:41:07 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:31.381 09:41:07 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:15:31.381 09:41:07 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:31.381 00:15:31.381 real 0m10.894s 00:15:31.381 user 0m0.687s 00:15:31.381 sys 0m1.197s 00:15:31.381 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:31.381 09:41:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.381 ************************************ 00:15:31.381 END TEST test_create_ublk 00:15:31.381 ************************************ 00:15:31.381 09:41:07 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:31.381 09:41:07 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:31.381 09:41:07 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:31.381 09:41:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.381 ************************************ 00:15:31.381 START TEST test_create_multi_ublk 00:15:31.381 ************************************ 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.381 [2024-07-24 09:41:07.347208] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:31.381 [2024-07-24 09:41:07.348585] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:15:31.381 
09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.381 [2024-07-24 09:41:07.442362] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:31.381 [2024-07-24 09:41:07.442835] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:31.381 [2024-07-24 09:41:07.442853] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:31.381 [2024-07-24 09:41:07.442864] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:31.381 [2024-07-24 09:41:07.450238] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:31.381 [2024-07-24 09:41:07.450267] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:31.381 [2024-07-24 09:41:07.458224] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:31.381 [2024-07-24 09:41:07.458810] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:31.381 [2024-07-24 09:41:07.477209] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.381 [2024-07-24 09:41:07.560353] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:31.381 [2024-07-24 09:41:07.560798] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:31.381 [2024-07-24 09:41:07.560818] ublk.c: 
937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:31.381 [2024-07-24 09:41:07.560826] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:31.381 [2024-07-24 09:41:07.569463] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:31.381 [2024-07-24 09:41:07.569489] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:31.381 [2024-07-24 09:41:07.576231] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:31.381 [2024-07-24 09:41:07.576827] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:31.381 [2024-07-24 09:41:07.585281] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:31.381 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.382 [2024-07-24 09:41:07.675361] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:31.382 [2024-07-24 09:41:07.675867] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:31.382 [2024-07-24 09:41:07.675885] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:31.382 [2024-07-24 09:41:07.675896] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:31.382 [2024-07-24 09:41:07.683246] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:31.382 [2024-07-24 09:41:07.683277] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:31.382 [2024-07-24 09:41:07.691230] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:31.382 [2024-07-24 09:41:07.691881] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:31.382 [2024-07-24 09:41:07.694719] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.382 [2024-07-24 09:41:07.779346] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:31.382 [2024-07-24 09:41:07.779798] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:31.382 [2024-07-24 09:41:07.779819] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:31.382 [2024-07-24 09:41:07.779828] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:15:31.382 [2024-07-24 09:41:07.787257] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:31.382 [2024-07-24 09:41:07.787277] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:31.382 [2024-07-24 09:41:07.795231] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:31.382 [2024-07-24 09:41:07.795809] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:31.382 [2024-07-24 09:41:07.804270] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:31.382 { 00:15:31.382 "ublk_device": "/dev/ublkb0", 00:15:31.382 "id": 0, 00:15:31.382 "queue_depth": 512, 00:15:31.382 "num_queues": 4, 00:15:31.382 "bdev_name": "Malloc0" 00:15:31.382 }, 00:15:31.382 { 00:15:31.382 "ublk_device": "/dev/ublkb1", 00:15:31.382 "id": 1, 00:15:31.382 "queue_depth": 512, 00:15:31.382 "num_queues": 4, 00:15:31.382 "bdev_name": "Malloc1" 00:15:31.382 }, 00:15:31.382 { 00:15:31.382 "ublk_device": "/dev/ublkb2", 00:15:31.382 "id": 2, 00:15:31.382 "queue_depth": 512, 00:15:31.382 "num_queues": 4, 00:15:31.382 "bdev_name": "Malloc2" 00:15:31.382 }, 00:15:31.382 { 00:15:31.382 "ublk_device": "/dev/ublkb3", 00:15:31.382 "id": 3, 00:15:31.382 "queue_depth": 512, 00:15:31.382 "num_queues": 4, 00:15:31.382 "bdev_name": "Malloc3" 00:15:31.382 } 00:15:31.382 ]' 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- 
# [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:31.382 09:41:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 
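The multi-disk checks running here reduce to a short create-and-verify loop against the SPDK JSON-RPC server. A minimal sketch of that flow, assuming the default rpc.py socket and the parameters used in this run (128 MiB malloc bdevs with a 4 KiB block size, 4 queues of depth 512):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC ublk_create_target
    for i in 0 1 2 3; do
      $RPC bdev_malloc_create -b Malloc$i 128 4096     # 128 MiB backing bdev, 4 KiB blocks
      $RPC ublk_start_disk Malloc$i $i -q 4 -d 512     # exposes /dev/ublkb$i via the kernel ublk driver
    done
    $RPC ublk_get_disks | jq -r '.[] | "\(.ublk_device) id=\(.id) qd=\(.queue_depth) nq=\(.num_queues) bdev=\(.bdev_name)"'

The test itself compares each field individually, which is what the jq queries and bash [[ ... ]] pattern matches surrounding this point are doing.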
00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:15:31.382 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.383 [2024-07-24 09:41:08.694321] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:31.383 [2024-07-24 09:41:08.733612] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:31.383 [2024-07-24 09:41:08.735072] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:31.383 [2024-07-24 09:41:08.745266] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:31.383 [2024-07-24 09:41:08.745550] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:31.383 [2024-07-24 09:41:08.745565] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.383 [2024-07-24 09:41:08.761322] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:31.383 [2024-07-24 09:41:08.797273] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:31.383 [2024-07-24 09:41:08.798426] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:31.383 [2024-07-24 09:41:08.805235] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:31.383 [2024-07-24 09:41:08.805525] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:31.383 [2024-07-24 09:41:08.805540] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.383 [2024-07-24 09:41:08.821374] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: 
ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:15:31.383 [2024-07-24 09:41:08.856280] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:31.383 [2024-07-24 09:41:08.860548] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:15:31.383 [2024-07-24 09:41:08.868240] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:31.383 [2024-07-24 09:41:08.868505] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:15:31.383 [2024-07-24 09:41:08.868518] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.383 [2024-07-24 09:41:08.880331] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:15:31.383 [2024-07-24 09:41:08.918260] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:31.383 [2024-07-24 09:41:08.922546] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:15:31.383 [2024-07-24 09:41:08.933259] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:31.383 [2024-07-24 09:41:08.933564] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:15:31.383 [2024-07-24 09:41:08.933583] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.383 09:41:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:15:31.383 [2024-07-24 09:41:09.133348] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:15:31.383 [2024-07-24 09:41:09.138508] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:15:31.383 [2024-07-24 09:41:09.138551] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:31.383 09:41:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:15:31.383 09:41:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:31.383 09:41:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:31.383 09:41:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.383 09:41:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for 
i in $(seq 0 $MAX_DEV_ID) 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:31.641 09:41:09 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:15:31.898 09:41:09 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:31.898 00:15:31.898 real 0m2.168s 00:15:31.898 user 0m1.030s 00:15:31.898 sys 0m0.221s 00:15:31.898 09:41:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:31.898 09:41:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.898 ************************************ 00:15:31.898 END TEST test_create_multi_ublk 00:15:31.898 ************************************ 00:15:31.898 09:41:09 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:31.898 09:41:09 ublk -- ublk/ublk.sh@147 -- # cleanup 00:15:31.898 09:41:09 ublk -- ublk/ublk.sh@130 -- # killprocess 87463 00:15:31.898 09:41:09 ublk -- common/autotest_common.sh@950 -- # '[' -z 87463 ']' 00:15:31.898 09:41:09 ublk -- common/autotest_common.sh@954 -- # kill -0 87463 00:15:31.898 09:41:09 ublk -- common/autotest_common.sh@955 -- # uname 00:15:31.898 09:41:09 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:31.898 09:41:09 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87463 00:15:31.898 09:41:09 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:31.898 
09:41:09 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:31.898 09:41:09 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87463' 00:15:31.898 killing process with pid 87463 00:15:31.898 09:41:09 ublk -- common/autotest_common.sh@969 -- # kill 87463 00:15:31.898 09:41:09 ublk -- common/autotest_common.sh@974 -- # wait 87463 00:15:32.157 [2024-07-24 09:41:09.734575] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:15:32.157 [2024-07-24 09:41:09.734643] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:15:32.415 00:15:32.415 real 0m18.251s 00:15:32.415 user 0m28.978s 00:15:32.415 sys 0m7.493s 00:15:32.415 09:41:09 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:32.415 ************************************ 00:15:32.415 END TEST ublk 00:15:32.415 ************************************ 00:15:32.415 09:41:09 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:32.415 09:41:10 -- spdk/autotest.sh@256 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:32.415 09:41:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:32.415 09:41:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:32.415 09:41:10 -- common/autotest_common.sh@10 -- # set +x 00:15:32.415 ************************************ 00:15:32.415 START TEST ublk_recovery 00:15:32.415 ************************************ 00:15:32.415 09:41:10 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:32.415 * Looking for test storage... 00:15:32.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:32.415 09:41:10 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:32.415 09:41:10 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:32.415 09:41:10 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:32.415 09:41:10 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:32.415 09:41:10 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:32.415 09:41:10 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:32.415 09:41:10 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:32.415 09:41:10 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:32.415 09:41:10 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:32.415 09:41:10 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:32.415 09:41:10 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=87808 00:15:32.415 09:41:10 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:32.415 09:41:10 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:32.415 09:41:10 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 87808 00:15:32.415 09:41:10 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 87808 ']' 00:15:32.415 09:41:10 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.415 09:41:10 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.415 09:41:10 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
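The teardown that closed test_create_multi_ublk above runs in the reverse order of setup: every ublk disk is stopped first (the driver traces UBLK_CMD_STOP_DEV and then UBLK_CMD_DEL_DEV completions), the ublk target is destroyed, and only then are the malloc bdevs deleted. A hedged sketch using the same RPCs, with the loop bounds implied by this run:

    for i in 0 1 2 3; do
      $RPC ublk_stop_disk $i              # driver logs STOP_DEV, then DEL_DEV, then "ublk dev $i stopped"
    done
    $RPC -t 120 ublk_destroy_target       # 120 s RPC timeout, matching the script's rpc.py -t 120 call
    for i in 0 1 2 3; do
      $RPC bdev_malloc_delete Malloc$i
    done
    $RPC bdev_get_bdevs | jq length       # check_leftover_devices expects 0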
00:15:32.415 09:41:10 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.415 09:41:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.673 [2024-07-24 09:41:10.272388] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:15:32.673 [2024-07-24 09:41:10.272511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87808 ] 00:15:32.673 [2024-07-24 09:41:10.440959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:32.673 [2024-07-24 09:41:10.487658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.673 [2024-07-24 09:41:10.487755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.606 09:41:11 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:33.606 09:41:11 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:15:33.606 09:41:11 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:33.606 09:41:11 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.606 09:41:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.606 [2024-07-24 09:41:11.067217] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:33.606 [2024-07-24 09:41:11.068588] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:33.606 09:41:11 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.606 09:41:11 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:33.606 09:41:11 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.606 09:41:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.606 malloc0 00:15:33.606 09:41:11 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.606 09:41:11 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:33.606 09:41:11 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.606 09:41:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.606 [2024-07-24 09:41:11.123347] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:15:33.606 [2024-07-24 09:41:11.123473] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:33.606 [2024-07-24 09:41:11.123488] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:33.606 [2024-07-24 09:41:11.123497] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:33.606 [2024-07-24 09:41:11.132300] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:33.606 [2024-07-24 09:41:11.132323] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:33.606 [2024-07-24 09:41:11.139222] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:33.606 [2024-07-24 09:41:11.139368] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:33.606 [2024-07-24 09:41:11.154216] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:33.606 1 00:15:33.606 09:41:11 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:15:33.606 09:41:11 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:34.538 09:41:12 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=87841 00:15:34.538 09:41:12 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:34.538 09:41:12 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:34.538 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:34.538 fio-3.35 00:15:34.538 Starting 1 process 00:15:39.801 09:41:17 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 87808 00:15:39.801 09:41:17 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:45.064 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 87808 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:45.064 09:41:22 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=87949 00:15:45.064 09:41:22 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:45.064 09:41:22 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:45.064 09:41:22 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 87949 00:15:45.064 09:41:22 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 87949 ']' 00:15:45.064 09:41:22 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.064 09:41:22 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:45.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.065 09:41:22 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.065 09:41:22 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:45.065 09:41:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.065 [2024-07-24 09:41:22.286945] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
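The recovery scenario being exercised here is: run I/O against /dev/ublkb1, hard-kill the SPDK target while that I/O is in flight, start a fresh target, and re-attach the existing kernel ublk device to a new bdev of the same name. A compressed sketch of that sequence, using the commands visible in the log (the pid variables are illustrative; the script tracks them itself):

    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio \
        --rw=randrw --direct=1 --time_based --runtime=60 &
    fio_pid=$!
    kill -9 "$spdk_tgt_pid"                                           # simulate a target crash mid-I/O
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &  # fresh target process
    # once the new target answers on /var/tmp/spdk.sock:
    $RPC ublk_create_target
    $RPC bdev_malloc_create -b malloc0 64 4096
    $RPC ublk_recover_disk malloc0 1       # re-binds existing kernel device ublk1 to the new bdev
    wait "$fio_pid"                        # fio should finish its 60 s run without errors

The driver-level trace of the recovery handshake (UBLK_CMD_GET_DEV_INFO, UBLK_CMD_START_USER_RECOVERY, UBLK_CMD_END_USER_RECOVERY) appears further down in the log.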
00:15:45.065 [2024-07-24 09:41:22.287101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87949 ] 00:15:45.065 [2024-07-24 09:41:22.455819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:45.065 [2024-07-24 09:41:22.507282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.065 [2024-07-24 09:41:22.507410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.323 09:41:23 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:45.323 09:41:23 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:15:45.323 09:41:23 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:45.323 09:41:23 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.323 09:41:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.323 [2024-07-24 09:41:23.092214] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:45.323 [2024-07-24 09:41:23.093620] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:45.323 09:41:23 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.323 09:41:23 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:45.323 09:41:23 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.323 09:41:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.323 malloc0 00:15:45.323 09:41:23 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.323 09:41:23 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:45.323 09:41:23 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.323 09:41:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.323 [2024-07-24 09:41:23.140403] ublk.c:2077:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:45.323 [2024-07-24 09:41:23.140455] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:45.323 [2024-07-24 09:41:23.140467] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:45.581 [2024-07-24 09:41:23.148286] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:45.581 [2024-07-24 09:41:23.148322] ublk.c:2006:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:45.581 [2024-07-24 09:41:23.148443] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:45.581 1 00:15:45.581 09:41:23 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.581 09:41:23 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 87841 00:15:45.581 [2024-07-24 09:41:23.156233] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:45.581 [2024-07-24 09:41:23.163865] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:45.581 [2024-07-24 09:41:23.171479] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:45.581 [2024-07-24 09:41:23.171508] ublk.c: 379:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:41.797 00:16:41.797 
fio_test: (groupid=0, jobs=1): err= 0: pid=87844: Wed Jul 24 09:42:12 2024 00:16:41.797 read: IOPS=22.8k, BW=89.2MiB/s (93.5MB/s)(5352MiB/60002msec) 00:16:41.797 slat (nsec): min=1977, max=3087.0k, avg=7073.78, stdev=3303.37 00:16:41.797 clat (usec): min=1105, max=6007.3k, avg=2721.75, stdev=38383.85 00:16:41.797 lat (usec): min=1113, max=6007.3k, avg=2728.82, stdev=38383.85 00:16:41.797 clat percentiles (usec): 00:16:41.797 | 1.00th=[ 1942], 5.00th=[ 2147], 10.00th=[ 2180], 20.00th=[ 2245], 00:16:41.798 | 30.00th=[ 2278], 40.00th=[ 2311], 50.00th=[ 2311], 60.00th=[ 2343], 00:16:41.798 | 70.00th=[ 2376], 80.00th=[ 2442], 90.00th=[ 2999], 95.00th=[ 3654], 00:16:41.798 | 99.00th=[ 4948], 99.50th=[ 5407], 99.90th=[ 6783], 99.95th=[ 7308], 00:16:41.798 | 99.99th=[12649] 00:16:41.798 bw ( KiB/s): min=23672, max=107128, per=100.00%, avg=100648.74, stdev=10893.75, samples=108 00:16:41.798 iops : min= 5918, max=26782, avg=25162.17, stdev=2723.44, samples=108 00:16:41.798 write: IOPS=22.8k, BW=89.1MiB/s (93.4MB/s)(5345MiB/60002msec); 0 zone resets 00:16:41.798 slat (nsec): min=1999, max=176228, avg=7107.28, stdev=2023.95 00:16:41.798 clat (usec): min=1152, max=6007.8k, avg=2872.14, stdev=43549.37 00:16:41.798 lat (usec): min=1160, max=6007.8k, avg=2879.25, stdev=43549.37 00:16:41.798 clat percentiles (usec): 00:16:41.798 | 1.00th=[ 1958], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2343], 00:16:41.798 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:16:41.798 | 70.00th=[ 2507], 80.00th=[ 2540], 90.00th=[ 3064], 95.00th=[ 3654], 00:16:41.798 | 99.00th=[ 4948], 99.50th=[ 5473], 99.90th=[ 6980], 99.95th=[ 7504], 00:16:41.798 | 99.99th=[12911] 00:16:41.798 bw ( KiB/s): min=24248, max=106320, per=100.00%, avg=100516.89, stdev=10690.14, samples=108 00:16:41.798 iops : min= 6062, max=26580, avg=25129.20, stdev=2672.53, samples=108 00:16:41.798 lat (msec) : 2=1.69%, 4=94.80%, 10=3.50%, 20=0.01%, >=2000=0.01% 00:16:41.798 cpu : usr=12.59%, sys=31.37%, ctx=117331, majf=0, minf=13 00:16:41.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:41.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:41.798 issued rwts: total=1369985,1368382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.798 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:41.798 00:16:41.798 Run status group 0 (all jobs): 00:16:41.798 READ: bw=89.2MiB/s (93.5MB/s), 89.2MiB/s-89.2MiB/s (93.5MB/s-93.5MB/s), io=5352MiB (5611MB), run=60002-60002msec 00:16:41.798 WRITE: bw=89.1MiB/s (93.4MB/s), 89.1MiB/s-89.1MiB/s (93.4MB/s-93.4MB/s), io=5345MiB (5605MB), run=60002-60002msec 00:16:41.798 00:16:41.798 Disk stats (read/write): 00:16:41.798 ublkb1: ios=1367254/1365755, merge=0/0, ticks=3617598/3684072, in_queue=7301670, util=99.93% 00:16:41.798 09:42:12 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:41.798 09:42:12 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.798 09:42:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.798 [2024-07-24 09:42:12.438430] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:41.798 [2024-07-24 09:42:12.482236] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:41.798 [2024-07-24 09:42:12.482544] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:41.798 [2024-07-24 
09:42:12.490240] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:41.798 [2024-07-24 09:42:12.490383] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:41.798 [2024-07-24 09:42:12.490393] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:41.798 09:42:12 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.798 09:42:12 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:41.798 09:42:12 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.798 09:42:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.798 [2024-07-24 09:42:12.506322] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:16:41.798 [2024-07-24 09:42:12.508227] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:16:41.798 [2024-07-24 09:42:12.508277] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:41.798 09:42:12 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.798 09:42:12 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:41.798 09:42:12 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:41.798 09:42:12 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 87949 00:16:41.798 09:42:12 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 87949 ']' 00:16:41.798 09:42:12 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 87949 00:16:41.798 09:42:12 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:16:41.798 09:42:12 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:41.798 09:42:12 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87949 00:16:41.798 09:42:12 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:41.798 09:42:12 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:41.798 killing process with pid 87949 00:16:41.798 09:42:12 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87949' 00:16:41.798 09:42:12 ublk_recovery -- common/autotest_common.sh@969 -- # kill 87949 00:16:41.798 09:42:12 ublk_recovery -- common/autotest_common.sh@974 -- # wait 87949 00:16:41.798 [2024-07-24 09:42:12.692162] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:16:41.798 [2024-07-24 09:42:12.692239] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:16:41.798 00:16:41.798 real 1m2.915s 00:16:41.798 user 1m45.718s 00:16:41.798 sys 0m35.839s 00:16:41.798 ************************************ 00:16:41.798 END TEST ublk_recovery 00:16:41.798 ************************************ 00:16:41.798 09:42:12 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:41.798 09:42:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.798 09:42:13 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:16:41.798 09:42:13 -- spdk/autotest.sh@264 -- # timing_exit lib 00:16:41.798 09:42:13 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:41.798 09:42:13 -- common/autotest_common.sh@10 -- # set +x 00:16:41.798 09:42:13 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:16:41.798 09:42:13 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:16:41.798 09:42:13 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:16:41.798 09:42:13 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:16:41.798 09:42:13 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:16:41.798 09:42:13 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:16:41.798 
09:42:13 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:16:41.798 09:42:13 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:16:41.798 09:42:13 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:16:41.798 09:42:13 -- spdk/autotest.sh@343 -- # '[' 1 -eq 1 ']' 00:16:41.798 09:42:13 -- spdk/autotest.sh@344 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:41.798 09:42:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:41.798 09:42:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:41.798 09:42:13 -- common/autotest_common.sh@10 -- # set +x 00:16:41.798 ************************************ 00:16:41.798 START TEST ftl 00:16:41.798 ************************************ 00:16:41.798 09:42:13 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:41.798 * Looking for test storage... 00:16:41.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:41.798 09:42:13 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:41.798 09:42:13 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:41.798 09:42:13 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:41.798 09:42:13 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:41.798 09:42:13 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:41.798 09:42:13 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:41.798 09:42:13 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:41.798 09:42:13 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:41.798 09:42:13 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:41.798 09:42:13 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.798 09:42:13 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.798 09:42:13 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:41.798 09:42:13 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:41.798 09:42:13 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:41.798 09:42:13 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:41.798 09:42:13 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:41.798 09:42:13 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:41.798 09:42:13 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.798 09:42:13 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.798 09:42:13 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:41.798 09:42:13 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:41.798 09:42:13 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:41.798 09:42:13 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:41.798 09:42:13 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:41.798 09:42:13 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:41.798 09:42:13 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:41.798 09:42:13 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:41.798 09:42:13 ftl -- ftl/common.sh@25 
-- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:41.798 09:42:13 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:41.798 09:42:13 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:41.798 09:42:13 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:41.798 09:42:13 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:16:41.798 09:42:13 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:41.798 09:42:13 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:41.798 09:42:13 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:41.798 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:41.798 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:41.798 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:41.798 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:41.798 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:41.798 09:42:14 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:41.798 09:42:14 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=88736 00:16:41.798 09:42:14 ftl -- ftl/ftl.sh@38 -- # waitforlisten 88736 00:16:41.798 09:42:14 ftl -- common/autotest_common.sh@831 -- # '[' -z 88736 ']' 00:16:41.799 09:42:14 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.799 09:42:14 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:41.799 09:42:14 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.799 09:42:14 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:41.799 09:42:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:41.799 [2024-07-24 09:42:14.179659] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
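The FTL suite launches its target with --wait-for-rpc so it can adjust bdev options before subsystem initialization, attaches the local NVMe controllers, and then splits them into a cache device (a bdev exposing 64-byte metadata) and a base device by filtering bdev_get_bdevs output. A sketch of the steps that follow in the log, assuming the paths from this run and the jq filters ftl.sh applies (the process substitution is an assumption about how the /dev/fd/62 argument is produced):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_set_options -d              # -d: skip bdev auto-examine before init
    $RPC framework_start_init
    $RPC load_subsystem_config -j <(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)   # attach local NVMe bdevs
    cache_disk=$($RPC bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' | head -n1)
    base_disk=$($RPC bdev_get_bdevs | jq -r --arg c "$cache_disk" '.[] | select(.driver_specific.nvme[0].pci_address != $c and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' | head -n1)

In this run that selection resolves to 0000:00:10.0 as the nv_cache device and 0000:00:11.0 as the base device, which is why ftl_fio_basic is invoked with those two addresses.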
00:16:41.799 [2024-07-24 09:42:14.180024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88736 ] 00:16:41.799 [2024-07-24 09:42:14.340242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.799 [2024-07-24 09:42:14.389158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.799 09:42:14 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:41.799 09:42:14 ftl -- common/autotest_common.sh@864 -- # return 0 00:16:41.799 09:42:14 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:41.799 09:42:15 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:41.799 09:42:15 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:41.799 09:42:15 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:41.799 09:42:16 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:41.799 09:42:16 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:41.799 09:42:16 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:41.799 09:42:16 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:16:41.799 09:42:16 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:41.799 09:42:16 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:16:41.799 09:42:16 ftl -- ftl/ftl.sh@50 -- # break 00:16:41.799 09:42:16 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:16:41.799 09:42:16 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:16:41.799 09:42:16 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:41.799 09:42:16 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:41.799 09:42:16 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:16:41.799 09:42:16 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:41.799 09:42:16 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:16:41.799 09:42:16 ftl -- ftl/ftl.sh@63 -- # break 00:16:41.799 09:42:16 ftl -- ftl/ftl.sh@66 -- # killprocess 88736 00:16:41.799 09:42:16 ftl -- common/autotest_common.sh@950 -- # '[' -z 88736 ']' 00:16:41.799 09:42:16 ftl -- common/autotest_common.sh@954 -- # kill -0 88736 00:16:41.799 09:42:16 ftl -- common/autotest_common.sh@955 -- # uname 00:16:41.799 09:42:16 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:41.799 09:42:16 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88736 00:16:41.799 killing process with pid 88736 00:16:41.799 09:42:16 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:41.799 09:42:16 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:41.799 09:42:16 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88736' 00:16:41.799 09:42:16 ftl -- common/autotest_common.sh@969 -- # kill 88736 00:16:41.799 09:42:16 ftl -- common/autotest_common.sh@974 -- # wait 88736 00:16:41.799 09:42:16 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:16:41.799 09:42:16 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:41.799 09:42:16 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:41.799 09:42:16 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:41.799 09:42:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:41.799 ************************************ 00:16:41.799 START TEST ftl_fio_basic 00:16:41.799 ************************************ 00:16:41.799 09:42:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:41.799 * Looking for test storage... 00:16:41.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:41.799 09:42:16 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:41.799 09:42:16 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:41.799 09:42:16 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:41.799 09:42:16 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=88845 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 88845 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 88845 ']' 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:41.799 [2024-07-24 09:42:17.140979] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:16:41.799 [2024-07-24 09:42:17.141435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88845 ] 00:16:41.799 [2024-07-24 09:42:17.307565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:41.799 [2024-07-24 09:42:17.358642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.799 [2024-07-24 09:42:17.358738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.799 [2024-07-24 09:42:17.358840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:16:41.799 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:41.800 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:16:41.800 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:16:41.800 09:42:17 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:41.800 { 00:16:41.800 "name": "nvme0n1", 00:16:41.800 "aliases": [ 00:16:41.800 "7943dc36-ee23-4f06-963b-f500a4d4e2a4" 00:16:41.800 ], 00:16:41.800 "product_name": "NVMe disk", 00:16:41.800 "block_size": 4096, 00:16:41.800 "num_blocks": 1310720, 00:16:41.800 "uuid": "7943dc36-ee23-4f06-963b-f500a4d4e2a4", 00:16:41.800 "assigned_rate_limits": { 00:16:41.800 "rw_ios_per_sec": 0, 00:16:41.800 "rw_mbytes_per_sec": 0, 00:16:41.800 "r_mbytes_per_sec": 0, 00:16:41.800 "w_mbytes_per_sec": 0 00:16:41.800 }, 00:16:41.800 "claimed": false, 00:16:41.800 "zoned": false, 00:16:41.800 "supported_io_types": { 00:16:41.800 "read": true, 00:16:41.800 "write": true, 00:16:41.800 "unmap": true, 00:16:41.800 "flush": true, 00:16:41.800 "reset": true, 00:16:41.800 "nvme_admin": true, 00:16:41.800 "nvme_io": true, 00:16:41.800 "nvme_io_md": false, 00:16:41.800 "write_zeroes": true, 00:16:41.800 "zcopy": false, 00:16:41.800 "get_zone_info": false, 00:16:41.800 "zone_management": false, 00:16:41.800 "zone_append": false, 00:16:41.800 "compare": true, 00:16:41.800 "compare_and_write": false, 00:16:41.800 "abort": true, 00:16:41.800 "seek_hole": false, 00:16:41.800 
"seek_data": false, 00:16:41.800 "copy": true, 00:16:41.800 "nvme_iov_md": false 00:16:41.800 }, 00:16:41.800 "driver_specific": { 00:16:41.800 "nvme": [ 00:16:41.800 { 00:16:41.800 "pci_address": "0000:00:11.0", 00:16:41.800 "trid": { 00:16:41.800 "trtype": "PCIe", 00:16:41.800 "traddr": "0000:00:11.0" 00:16:41.800 }, 00:16:41.800 "ctrlr_data": { 00:16:41.800 "cntlid": 0, 00:16:41.800 "vendor_id": "0x1b36", 00:16:41.800 "model_number": "QEMU NVMe Ctrl", 00:16:41.800 "serial_number": "12341", 00:16:41.800 "firmware_revision": "8.0.0", 00:16:41.800 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:41.800 "oacs": { 00:16:41.800 "security": 0, 00:16:41.800 "format": 1, 00:16:41.800 "firmware": 0, 00:16:41.800 "ns_manage": 1 00:16:41.800 }, 00:16:41.800 "multi_ctrlr": false, 00:16:41.800 "ana_reporting": false 00:16:41.800 }, 00:16:41.800 "vs": { 00:16:41.800 "nvme_version": "1.4" 00:16:41.800 }, 00:16:41.800 "ns_data": { 00:16:41.800 "id": 1, 00:16:41.800 "can_share": false 00:16:41.800 } 00:16:41.800 } 00:16:41.800 ], 00:16:41.800 "mp_policy": "active_passive" 00:16:41.800 } 00:16:41.800 } 00:16:41.800 ]' 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=773e420f-013b-433e-8730-2c19838290b2 00:16:41.800 09:42:18 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 773e420f-013b-433e-8730-2c19838290b2 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=eda1015d-5594-4795-8aed-495dfc76231d 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 eda1015d-5594-4795-8aed-495dfc76231d 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=eda1015d-5594-4795-8aed-495dfc76231d 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size eda1015d-5594-4795-8aed-495dfc76231d 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=eda1015d-5594-4795-8aed-495dfc76231d 00:16:41.800 09:42:19 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eda1015d-5594-4795-8aed-495dfc76231d 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:41.800 { 00:16:41.800 "name": "eda1015d-5594-4795-8aed-495dfc76231d", 00:16:41.800 "aliases": [ 00:16:41.800 "lvs/nvme0n1p0" 00:16:41.800 ], 00:16:41.800 "product_name": "Logical Volume", 00:16:41.800 "block_size": 4096, 00:16:41.800 "num_blocks": 26476544, 00:16:41.800 "uuid": "eda1015d-5594-4795-8aed-495dfc76231d", 00:16:41.800 "assigned_rate_limits": { 00:16:41.800 "rw_ios_per_sec": 0, 00:16:41.800 "rw_mbytes_per_sec": 0, 00:16:41.800 "r_mbytes_per_sec": 0, 00:16:41.800 "w_mbytes_per_sec": 0 00:16:41.800 }, 00:16:41.800 "claimed": false, 00:16:41.800 "zoned": false, 00:16:41.800 "supported_io_types": { 00:16:41.800 "read": true, 00:16:41.800 "write": true, 00:16:41.800 "unmap": true, 00:16:41.800 "flush": false, 00:16:41.800 "reset": true, 00:16:41.800 "nvme_admin": false, 00:16:41.800 "nvme_io": false, 00:16:41.800 "nvme_io_md": false, 00:16:41.800 "write_zeroes": true, 00:16:41.800 "zcopy": false, 00:16:41.800 "get_zone_info": false, 00:16:41.800 "zone_management": false, 00:16:41.800 "zone_append": false, 00:16:41.800 "compare": false, 00:16:41.800 "compare_and_write": false, 00:16:41.800 "abort": false, 00:16:41.800 "seek_hole": true, 00:16:41.800 "seek_data": true, 00:16:41.800 "copy": false, 00:16:41.800 "nvme_iov_md": false 00:16:41.800 }, 00:16:41.800 "driver_specific": { 00:16:41.800 "lvol": { 00:16:41.800 "lvol_store_uuid": "773e420f-013b-433e-8730-2c19838290b2", 00:16:41.800 "base_bdev": "nvme0n1", 00:16:41.800 "thin_provision": true, 00:16:41.800 "num_allocated_clusters": 0, 00:16:41.800 "snapshot": false, 00:16:41.800 "clone": false, 00:16:41.800 "esnap_clone": false 00:16:41.800 } 00:16:41.800 } 00:16:41.800 } 00:16:41.800 ]' 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size eda1015d-5594-4795-8aed-495dfc76231d 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=eda1015d-5594-4795-8aed-495dfc76231d 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:16:41.800 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:41.801 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:41.801 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eda1015d-5594-4795-8aed-495dfc76231d 00:16:42.059 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:42.059 { 00:16:42.059 "name": "eda1015d-5594-4795-8aed-495dfc76231d", 00:16:42.059 "aliases": [ 00:16:42.059 "lvs/nvme0n1p0" 00:16:42.059 ], 00:16:42.059 "product_name": "Logical Volume", 00:16:42.059 "block_size": 4096, 00:16:42.059 "num_blocks": 26476544, 00:16:42.059 "uuid": "eda1015d-5594-4795-8aed-495dfc76231d", 00:16:42.059 "assigned_rate_limits": { 00:16:42.059 "rw_ios_per_sec": 0, 00:16:42.059 "rw_mbytes_per_sec": 0, 00:16:42.059 "r_mbytes_per_sec": 0, 00:16:42.059 "w_mbytes_per_sec": 0 00:16:42.059 }, 00:16:42.059 "claimed": false, 00:16:42.059 "zoned": false, 00:16:42.059 "supported_io_types": { 00:16:42.059 "read": true, 00:16:42.059 "write": true, 00:16:42.059 "unmap": true, 00:16:42.059 "flush": false, 00:16:42.059 "reset": true, 00:16:42.059 "nvme_admin": false, 00:16:42.059 "nvme_io": false, 00:16:42.059 "nvme_io_md": false, 00:16:42.059 "write_zeroes": true, 00:16:42.059 "zcopy": false, 00:16:42.059 "get_zone_info": false, 00:16:42.059 "zone_management": false, 00:16:42.059 "zone_append": false, 00:16:42.059 "compare": false, 00:16:42.059 "compare_and_write": false, 00:16:42.059 "abort": false, 00:16:42.059 "seek_hole": true, 00:16:42.059 "seek_data": true, 00:16:42.059 "copy": false, 00:16:42.059 "nvme_iov_md": false 00:16:42.059 }, 00:16:42.059 "driver_specific": { 00:16:42.059 "lvol": { 00:16:42.059 "lvol_store_uuid": "773e420f-013b-433e-8730-2c19838290b2", 00:16:42.059 "base_bdev": "nvme0n1", 00:16:42.059 "thin_provision": true, 00:16:42.059 "num_allocated_clusters": 0, 00:16:42.059 "snapshot": false, 00:16:42.059 "clone": false, 00:16:42.059 "esnap_clone": false 00:16:42.059 } 00:16:42.059 } 00:16:42.059 } 00:16:42.059 ]' 00:16:42.060 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:42.060 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:42.060 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:42.060 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:42.060 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:42.060 09:42:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:42.060 09:42:19 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:16:42.060 09:42:19 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:42.318 09:42:20 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:42.318 09:42:20 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:42.318 09:42:20 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:42.318 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:42.318 09:42:20 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size eda1015d-5594-4795-8aed-495dfc76231d 00:16:42.318 09:42:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=eda1015d-5594-4795-8aed-495dfc76231d 
00:16:42.318 09:42:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:42.318 09:42:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:42.318 09:42:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:42.318 09:42:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eda1015d-5594-4795-8aed-495dfc76231d 00:16:42.576 09:42:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:42.576 { 00:16:42.576 "name": "eda1015d-5594-4795-8aed-495dfc76231d", 00:16:42.576 "aliases": [ 00:16:42.576 "lvs/nvme0n1p0" 00:16:42.576 ], 00:16:42.576 "product_name": "Logical Volume", 00:16:42.576 "block_size": 4096, 00:16:42.576 "num_blocks": 26476544, 00:16:42.576 "uuid": "eda1015d-5594-4795-8aed-495dfc76231d", 00:16:42.576 "assigned_rate_limits": { 00:16:42.576 "rw_ios_per_sec": 0, 00:16:42.576 "rw_mbytes_per_sec": 0, 00:16:42.576 "r_mbytes_per_sec": 0, 00:16:42.576 "w_mbytes_per_sec": 0 00:16:42.576 }, 00:16:42.576 "claimed": false, 00:16:42.576 "zoned": false, 00:16:42.576 "supported_io_types": { 00:16:42.576 "read": true, 00:16:42.576 "write": true, 00:16:42.576 "unmap": true, 00:16:42.576 "flush": false, 00:16:42.576 "reset": true, 00:16:42.576 "nvme_admin": false, 00:16:42.576 "nvme_io": false, 00:16:42.576 "nvme_io_md": false, 00:16:42.576 "write_zeroes": true, 00:16:42.576 "zcopy": false, 00:16:42.576 "get_zone_info": false, 00:16:42.576 "zone_management": false, 00:16:42.576 "zone_append": false, 00:16:42.576 "compare": false, 00:16:42.576 "compare_and_write": false, 00:16:42.576 "abort": false, 00:16:42.576 "seek_hole": true, 00:16:42.576 "seek_data": true, 00:16:42.576 "copy": false, 00:16:42.576 "nvme_iov_md": false 00:16:42.576 }, 00:16:42.576 "driver_specific": { 00:16:42.576 "lvol": { 00:16:42.576 "lvol_store_uuid": "773e420f-013b-433e-8730-2c19838290b2", 00:16:42.576 "base_bdev": "nvme0n1", 00:16:42.576 "thin_provision": true, 00:16:42.576 "num_allocated_clusters": 0, 00:16:42.576 "snapshot": false, 00:16:42.576 "clone": false, 00:16:42.576 "esnap_clone": false 00:16:42.576 } 00:16:42.576 } 00:16:42.576 } 00:16:42.576 ]' 00:16:42.576 09:42:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:42.576 09:42:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:42.576 09:42:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:42.576 09:42:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:42.576 09:42:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:42.576 09:42:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:42.576 09:42:20 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:42.576 09:42:20 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:42.576 09:42:20 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d eda1015d-5594-4795-8aed-495dfc76231d -c nvc0n1p0 --l2p_dram_limit 60 00:16:42.835 [2024-07-24 09:42:20.566076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.835 [2024-07-24 09:42:20.566137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:42.835 [2024-07-24 09:42:20.566162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:42.835 [2024-07-24 09:42:20.566173] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.835 [2024-07-24 09:42:20.566297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.835 [2024-07-24 09:42:20.566311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:42.835 [2024-07-24 09:42:20.566325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:42.835 [2024-07-24 09:42:20.566335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.835 [2024-07-24 09:42:20.566387] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:42.835 [2024-07-24 09:42:20.566713] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:42.835 [2024-07-24 09:42:20.566743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.835 [2024-07-24 09:42:20.566765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:42.835 [2024-07-24 09:42:20.566780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:16:42.835 [2024-07-24 09:42:20.566802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.835 [2024-07-24 09:42:20.566908] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID aca6e3d2-fc75-447f-af6f-4077a564eb06 00:16:42.835 [2024-07-24 09:42:20.568397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.835 [2024-07-24 09:42:20.568438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:42.835 [2024-07-24 09:42:20.568451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:42.835 [2024-07-24 09:42:20.568480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.835 [2024-07-24 09:42:20.575939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.835 [2024-07-24 09:42:20.575986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:42.835 [2024-07-24 09:42:20.576010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.386 ms 00:16:42.835 [2024-07-24 09:42:20.576023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.835 [2024-07-24 09:42:20.576139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.835 [2024-07-24 09:42:20.576158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:42.835 [2024-07-24 09:42:20.576169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:16:42.835 [2024-07-24 09:42:20.576204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.835 [2024-07-24 09:42:20.576302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.835 [2024-07-24 09:42:20.576320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:42.835 [2024-07-24 09:42:20.576331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:42.835 [2024-07-24 09:42:20.576346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.835 [2024-07-24 09:42:20.576381] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:42.835 [2024-07-24 09:42:20.578170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.835 [2024-07-24 09:42:20.578222] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:42.835 [2024-07-24 09:42:20.578249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.797 ms 00:16:42.835 [2024-07-24 09:42:20.578260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.835 [2024-07-24 09:42:20.578305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.835 [2024-07-24 09:42:20.578316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:42.835 [2024-07-24 09:42:20.578329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:42.835 [2024-07-24 09:42:20.578354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.835 [2024-07-24 09:42:20.578387] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:42.835 [2024-07-24 09:42:20.578535] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:42.835 [2024-07-24 09:42:20.578568] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:42.835 [2024-07-24 09:42:20.578582] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:42.835 [2024-07-24 09:42:20.578600] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:42.835 [2024-07-24 09:42:20.578613] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:42.835 [2024-07-24 09:42:20.578626] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:42.835 [2024-07-24 09:42:20.578635] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:42.835 [2024-07-24 09:42:20.578648] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:42.835 [2024-07-24 09:42:20.578670] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:42.835 [2024-07-24 09:42:20.578695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.835 [2024-07-24 09:42:20.578705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:42.835 [2024-07-24 09:42:20.578718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:16:42.835 [2024-07-24 09:42:20.578727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.835 [2024-07-24 09:42:20.578816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.835 [2024-07-24 09:42:20.578826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:42.835 [2024-07-24 09:42:20.578854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:42.835 [2024-07-24 09:42:20.578863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.835 [2024-07-24 09:42:20.578996] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:42.835 [2024-07-24 09:42:20.579015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:42.835 [2024-07-24 09:42:20.579028] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:42.835 [2024-07-24 09:42:20.579038] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.835 [2024-07-24 09:42:20.579051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:42.835 [2024-07-24 
09:42:20.579060] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:42.835 [2024-07-24 09:42:20.579072] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:42.835 [2024-07-24 09:42:20.579081] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:42.835 [2024-07-24 09:42:20.579093] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:42.835 [2024-07-24 09:42:20.579102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:42.835 [2024-07-24 09:42:20.579114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:42.835 [2024-07-24 09:42:20.579123] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:42.835 [2024-07-24 09:42:20.579135] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:42.835 [2024-07-24 09:42:20.579144] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:42.835 [2024-07-24 09:42:20.579158] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:42.835 [2024-07-24 09:42:20.579167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.835 [2024-07-24 09:42:20.579180] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:42.835 [2024-07-24 09:42:20.579199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:42.835 [2024-07-24 09:42:20.579212] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.835 [2024-07-24 09:42:20.579221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:42.835 [2024-07-24 09:42:20.579233] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:42.835 [2024-07-24 09:42:20.579241] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.835 [2024-07-24 09:42:20.579253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:42.835 [2024-07-24 09:42:20.579263] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:42.835 [2024-07-24 09:42:20.579274] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.835 [2024-07-24 09:42:20.579283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:42.835 [2024-07-24 09:42:20.579309] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:42.835 [2024-07-24 09:42:20.579318] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.835 [2024-07-24 09:42:20.579330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:42.835 [2024-07-24 09:42:20.579339] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:42.836 [2024-07-24 09:42:20.579356] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.836 [2024-07-24 09:42:20.579365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:42.836 [2024-07-24 09:42:20.579377] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:42.836 [2024-07-24 09:42:20.579386] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:42.836 [2024-07-24 09:42:20.579398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:42.836 [2024-07-24 09:42:20.579407] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:42.836 [2024-07-24 09:42:20.579418] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:16:42.836 [2024-07-24 09:42:20.579427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:42.836 [2024-07-24 09:42:20.579438] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:42.836 [2024-07-24 09:42:20.579447] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.836 [2024-07-24 09:42:20.579459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:42.836 [2024-07-24 09:42:20.579468] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:42.836 [2024-07-24 09:42:20.579481] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.836 [2024-07-24 09:42:20.579489] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:42.836 [2024-07-24 09:42:20.579519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:42.836 [2024-07-24 09:42:20.579542] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:42.836 [2024-07-24 09:42:20.579558] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.836 [2024-07-24 09:42:20.579568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:42.836 [2024-07-24 09:42:20.579580] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:42.836 [2024-07-24 09:42:20.579589] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:42.836 [2024-07-24 09:42:20.579601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:42.836 [2024-07-24 09:42:20.579610] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:42.836 [2024-07-24 09:42:20.579622] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:42.836 [2024-07-24 09:42:20.579641] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:42.836 [2024-07-24 09:42:20.579656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:42.836 [2024-07-24 09:42:20.579668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:42.836 [2024-07-24 09:42:20.579681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:42.836 [2024-07-24 09:42:20.579692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:42.836 [2024-07-24 09:42:20.579704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:42.836 [2024-07-24 09:42:20.579715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:42.836 [2024-07-24 09:42:20.579727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:42.836 [2024-07-24 09:42:20.579738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:42.836 [2024-07-24 09:42:20.579756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:42.836 [2024-07-24 
09:42:20.579766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:42.836 [2024-07-24 09:42:20.579780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:42.836 [2024-07-24 09:42:20.579790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:42.836 [2024-07-24 09:42:20.579803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:42.836 [2024-07-24 09:42:20.579813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:42.836 [2024-07-24 09:42:20.579830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:42.836 [2024-07-24 09:42:20.579840] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:42.836 [2024-07-24 09:42:20.579867] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:42.836 [2024-07-24 09:42:20.579889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:42.836 [2024-07-24 09:42:20.579903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:42.836 [2024-07-24 09:42:20.579914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:42.836 [2024-07-24 09:42:20.579927] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:42.836 [2024-07-24 09:42:20.579938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.836 [2024-07-24 09:42:20.579951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:42.836 [2024-07-24 09:42:20.579960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:16:42.836 [2024-07-24 09:42:20.579976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.836 [2024-07-24 09:42:20.580046] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:16:42.836 [2024-07-24 09:42:20.580060] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:45.401 [2024-07-24 09:42:22.907812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.401 [2024-07-24 09:42:22.907910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:45.401 [2024-07-24 09:42:22.907929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2331.541 ms 00:16:45.401 [2024-07-24 09:42:22.907945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.401 [2024-07-24 09:42:22.928586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.401 [2024-07-24 09:42:22.928676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:45.401 [2024-07-24 09:42:22.928709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.521 ms 00:16:45.401 [2024-07-24 09:42:22.928724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.401 [2024-07-24 09:42:22.928847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.401 [2024-07-24 09:42:22.928868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:45.401 [2024-07-24 09:42:22.928879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:16:45.401 [2024-07-24 09:42:22.928893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.401 [2024-07-24 09:42:22.958041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.401 [2024-07-24 09:42:22.958100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:45.401 [2024-07-24 09:42:22.958119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.092 ms 00:16:45.401 [2024-07-24 09:42:22.958137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.401 [2024-07-24 09:42:22.958230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.401 [2024-07-24 09:42:22.958253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:45.401 [2024-07-24 09:42:22.958268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:45.401 [2024-07-24 09:42:22.958285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.401 [2024-07-24 09:42:22.959228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.401 [2024-07-24 09:42:22.959267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:45.401 [2024-07-24 09:42:22.959283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:16:45.401 [2024-07-24 09:42:22.959300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.401 [2024-07-24 09:42:22.959475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.401 [2024-07-24 09:42:22.959501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:45.401 [2024-07-24 09:42:22.959517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:16:45.401 [2024-07-24 09:42:22.959534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.401 [2024-07-24 09:42:22.972685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.401 [2024-07-24 09:42:22.972727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:45.401 [2024-07-24 
09:42:22.972746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.128 ms 00:16:45.401 [2024-07-24 09:42:22.972760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.401 [2024-07-24 09:42:22.982363] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:45.401 [2024-07-24 09:42:23.010326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.401 [2024-07-24 09:42:23.010364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:45.401 [2024-07-24 09:42:23.010383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.477 ms 00:16:45.401 [2024-07-24 09:42:23.010399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.401 [2024-07-24 09:42:23.059983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.401 [2024-07-24 09:42:23.060025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:45.401 [2024-07-24 09:42:23.060058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.584 ms 00:16:45.401 [2024-07-24 09:42:23.060071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.401 [2024-07-24 09:42:23.060336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.401 [2024-07-24 09:42:23.060352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:45.401 [2024-07-24 09:42:23.060369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:16:45.401 [2024-07-24 09:42:23.060379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.401 [2024-07-24 09:42:23.064066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.401 [2024-07-24 09:42:23.064101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:45.401 [2024-07-24 09:42:23.064118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.629 ms 00:16:45.401 [2024-07-24 09:42:23.064130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.401 [2024-07-24 09:42:23.066946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.401 [2024-07-24 09:42:23.066980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:45.401 [2024-07-24 09:42:23.066997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.765 ms 00:16:45.401 [2024-07-24 09:42:23.067007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.401 [2024-07-24 09:42:23.067349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.401 [2024-07-24 09:42:23.067367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:45.401 [2024-07-24 09:42:23.067382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:16:45.401 [2024-07-24 09:42:23.067393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.401 [2024-07-24 09:42:23.112402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.401 [2024-07-24 09:42:23.112438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:45.401 [2024-07-24 09:42:23.112456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.021 ms 00:16:45.401 [2024-07-24 09:42:23.112467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.401 [2024-07-24 
09:42:23.118350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.402 [2024-07-24 09:42:23.118385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:45.402 [2024-07-24 09:42:23.118402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.815 ms 00:16:45.402 [2024-07-24 09:42:23.118413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.402 [2024-07-24 09:42:23.121789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.402 [2024-07-24 09:42:23.121821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:45.402 [2024-07-24 09:42:23.121837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.319 ms 00:16:45.402 [2024-07-24 09:42:23.121847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.402 [2024-07-24 09:42:23.125933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.402 [2024-07-24 09:42:23.125967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:45.402 [2024-07-24 09:42:23.125983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.035 ms 00:16:45.402 [2024-07-24 09:42:23.125993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.402 [2024-07-24 09:42:23.126062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.402 [2024-07-24 09:42:23.126075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:45.402 [2024-07-24 09:42:23.126109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:45.402 [2024-07-24 09:42:23.126120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.402 [2024-07-24 09:42:23.126261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.402 [2024-07-24 09:42:23.126275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:45.402 [2024-07-24 09:42:23.126289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:16:45.402 [2024-07-24 09:42:23.126300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.402 [2024-07-24 09:42:23.127983] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2565.467 ms, result 0 00:16:45.402 { 00:16:45.402 "name": "ftl0", 00:16:45.402 "uuid": "aca6e3d2-fc75-447f-af6f-4077a564eb06" 00:16:45.402 } 00:16:45.402 09:42:23 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:45.402 09:42:23 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:45.402 09:42:23 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:45.402 09:42:23 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:16:45.402 09:42:23 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:45.402 09:42:23 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:45.402 09:42:23 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:45.659 09:42:23 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:45.918 [ 00:16:45.918 { 00:16:45.918 "name": "ftl0", 00:16:45.918 "aliases": [ 00:16:45.918 "aca6e3d2-fc75-447f-af6f-4077a564eb06" 00:16:45.918 ], 00:16:45.918 "product_name": "FTL disk", 
00:16:45.918 "block_size": 4096, 00:16:45.918 "num_blocks": 20971520, 00:16:45.918 "uuid": "aca6e3d2-fc75-447f-af6f-4077a564eb06", 00:16:45.918 "assigned_rate_limits": { 00:16:45.918 "rw_ios_per_sec": 0, 00:16:45.918 "rw_mbytes_per_sec": 0, 00:16:45.918 "r_mbytes_per_sec": 0, 00:16:45.918 "w_mbytes_per_sec": 0 00:16:45.918 }, 00:16:45.918 "claimed": false, 00:16:45.918 "zoned": false, 00:16:45.918 "supported_io_types": { 00:16:45.918 "read": true, 00:16:45.918 "write": true, 00:16:45.918 "unmap": true, 00:16:45.918 "flush": true, 00:16:45.918 "reset": false, 00:16:45.918 "nvme_admin": false, 00:16:45.918 "nvme_io": false, 00:16:45.918 "nvme_io_md": false, 00:16:45.918 "write_zeroes": true, 00:16:45.918 "zcopy": false, 00:16:45.918 "get_zone_info": false, 00:16:45.918 "zone_management": false, 00:16:45.918 "zone_append": false, 00:16:45.918 "compare": false, 00:16:45.918 "compare_and_write": false, 00:16:45.918 "abort": false, 00:16:45.918 "seek_hole": false, 00:16:45.918 "seek_data": false, 00:16:45.918 "copy": false, 00:16:45.918 "nvme_iov_md": false 00:16:45.918 }, 00:16:45.918 "driver_specific": { 00:16:45.918 "ftl": { 00:16:45.918 "base_bdev": "eda1015d-5594-4795-8aed-495dfc76231d", 00:16:45.918 "cache": "nvc0n1p0" 00:16:45.918 } 00:16:45.918 } 00:16:45.918 } 00:16:45.918 ] 00:16:45.918 09:42:23 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:16:45.918 09:42:23 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:16:45.918 09:42:23 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:46.178 09:42:23 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:16:46.178 09:42:23 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:46.178 [2024-07-24 09:42:23.914082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.178 [2024-07-24 09:42:23.914157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:46.178 [2024-07-24 09:42:23.914174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:46.178 [2024-07-24 09:42:23.914202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.178 [2024-07-24 09:42:23.914258] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:46.178 [2024-07-24 09:42:23.915535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.178 [2024-07-24 09:42:23.915563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:46.178 [2024-07-24 09:42:23.915584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.254 ms 00:16:46.178 [2024-07-24 09:42:23.915596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.178 [2024-07-24 09:42:23.916243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.178 [2024-07-24 09:42:23.916284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:46.178 [2024-07-24 09:42:23.916300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:16:46.178 [2024-07-24 09:42:23.916312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.178 [2024-07-24 09:42:23.918883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.178 [2024-07-24 09:42:23.918905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:46.178 [2024-07-24 
09:42:23.918924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.539 ms 00:16:46.178 [2024-07-24 09:42:23.918935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.178 [2024-07-24 09:42:23.924038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.178 [2024-07-24 09:42:23.924090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:46.178 [2024-07-24 09:42:23.924106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.059 ms 00:16:46.178 [2024-07-24 09:42:23.924117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.178 [2024-07-24 09:42:23.925991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.178 [2024-07-24 09:42:23.926029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:46.178 [2024-07-24 09:42:23.926050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:16:46.178 [2024-07-24 09:42:23.926061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.178 [2024-07-24 09:42:23.932151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.178 [2024-07-24 09:42:23.932204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:46.178 [2024-07-24 09:42:23.932221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.044 ms 00:16:46.178 [2024-07-24 09:42:23.932232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.178 [2024-07-24 09:42:23.932452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.178 [2024-07-24 09:42:23.932467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:46.178 [2024-07-24 09:42:23.932482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:16:46.178 [2024-07-24 09:42:23.932492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.178 [2024-07-24 09:42:23.934992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.178 [2024-07-24 09:42:23.935026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:46.178 [2024-07-24 09:42:23.935041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.454 ms 00:16:46.178 [2024-07-24 09:42:23.935051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.178 [2024-07-24 09:42:23.936771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.178 [2024-07-24 09:42:23.936804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:46.178 [2024-07-24 09:42:23.936822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.671 ms 00:16:46.178 [2024-07-24 09:42:23.936832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.178 [2024-07-24 09:42:23.938179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.178 [2024-07-24 09:42:23.938224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:46.178 [2024-07-24 09:42:23.938240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.298 ms 00:16:46.178 [2024-07-24 09:42:23.938250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.178 [2024-07-24 09:42:23.939559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.178 [2024-07-24 09:42:23.939590] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:46.178 [2024-07-24 09:42:23.939604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.205 ms 00:16:46.178 [2024-07-24 09:42:23.939614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.178 [2024-07-24 09:42:23.939666] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:46.178 [2024-07-24 09:42:23.939685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 09:42:23.939966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:46.178 [2024-07-24 
09:42:23.939978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.939992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:16:46.179 [2024-07-24 09:42:23.940321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.940989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.941001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.941017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:46.179 [2024-07-24 09:42:23.941035] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:46.179 [2024-07-24 09:42:23.941052] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aca6e3d2-fc75-447f-af6f-4077a564eb06 00:16:46.179 [2024-07-24 09:42:23.941064] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:46.179 [2024-07-24 09:42:23.941077] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:46.179 [2024-07-24 09:42:23.941087] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:46.179 [2024-07-24 09:42:23.941104] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:46.179 [2024-07-24 09:42:23.941114] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:46.179 [2024-07-24 09:42:23.941128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:46.179 [2024-07-24 09:42:23.941138] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:46.179 [2024-07-24 09:42:23.941150] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:46.179 [2024-07-24 09:42:23.941167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:46.180 [2024-07-24 09:42:23.941180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.180 [2024-07-24 09:42:23.941203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:46.180 [2024-07-24 09:42:23.941217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.523 ms 00:16:46.180 [2024-07-24 09:42:23.941231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.180 [2024-07-24 09:42:23.944259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.180 [2024-07-24 09:42:23.944284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:46.180 [2024-07-24 09:42:23.944303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.980 ms 00:16:46.180 [2024-07-24 09:42:23.944313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.180 [2024-07-24 09:42:23.944522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.180 [2024-07-24 09:42:23.944534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:46.180 [2024-07-24 09:42:23.944552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:16:46.180 [2024-07-24 09:42:23.944562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.180 [2024-07-24 09:42:23.956514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.180 [2024-07-24 09:42:23.956545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:46.180 [2024-07-24 09:42:23.956562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.180 [2024-07-24 09:42:23.956573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.180 
[2024-07-24 09:42:23.956658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.180 [2024-07-24 09:42:23.956671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:46.180 [2024-07-24 09:42:23.956689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.180 [2024-07-24 09:42:23.956699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.180 [2024-07-24 09:42:23.956822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.180 [2024-07-24 09:42:23.956837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:46.180 [2024-07-24 09:42:23.956854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.180 [2024-07-24 09:42:23.956865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.180 [2024-07-24 09:42:23.956939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.180 [2024-07-24 09:42:23.956951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:46.180 [2024-07-24 09:42:23.956965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.180 [2024-07-24 09:42:23.956978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.180 [2024-07-24 09:42:23.982006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.180 [2024-07-24 09:42:23.982052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:46.180 [2024-07-24 09:42:23.982071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.180 [2024-07-24 09:42:23.982082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.438 [2024-07-24 09:42:23.996064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.438 [2024-07-24 09:42:23.996098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:46.438 [2024-07-24 09:42:23.996119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.438 [2024-07-24 09:42:23.996130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.438 [2024-07-24 09:42:23.996263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.438 [2024-07-24 09:42:23.996277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:46.438 [2024-07-24 09:42:23.996296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.438 [2024-07-24 09:42:23.996307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.438 [2024-07-24 09:42:23.996413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.438 [2024-07-24 09:42:23.996426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:46.438 [2024-07-24 09:42:23.996440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.438 [2024-07-24 09:42:23.996450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.438 [2024-07-24 09:42:23.996576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.438 [2024-07-24 09:42:23.996590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:46.438 [2024-07-24 09:42:23.996606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.438 [2024-07-24 09:42:23.996616] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.438 [2024-07-24 09:42:23.996689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.438 [2024-07-24 09:42:23.996728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:46.438 [2024-07-24 09:42:23.996743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.438 [2024-07-24 09:42:23.996753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.438 [2024-07-24 09:42:23.996827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.438 [2024-07-24 09:42:23.996841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:46.438 [2024-07-24 09:42:23.996858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.438 [2024-07-24 09:42:23.996869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.438 [2024-07-24 09:42:23.996947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.438 [2024-07-24 09:42:23.996959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:46.438 [2024-07-24 09:42:23.996973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.438 [2024-07-24 09:42:23.996983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.438 [2024-07-24 09:42:23.997244] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 83.225 ms, result 0 00:16:46.438 true 00:16:46.438 09:42:24 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 88845 00:16:46.438 09:42:24 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 88845 ']' 00:16:46.438 09:42:24 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 88845 00:16:46.438 09:42:24 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:16:46.438 09:42:24 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:46.438 09:42:24 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88845 00:16:46.438 09:42:24 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:46.438 09:42:24 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:46.438 09:42:24 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88845' 00:16:46.438 killing process with pid 88845 00:16:46.438 09:42:24 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 88845 00:16:46.438 09:42:24 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 88845 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:49.716 09:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:49.716 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:16:49.716 fio-3.35 00:16:49.716 Starting 1 thread 00:16:54.999 00:16:54.999 test: (groupid=0, jobs=1): err= 0: pid=89013: Wed Jul 24 09:42:32 2024 00:16:54.999 read: IOPS=925, BW=61.5MiB/s (64.4MB/s)(255MiB/4142msec) 00:16:54.999 slat (nsec): min=4473, max=26070, avg=6223.40, stdev=2290.66 00:16:54.999 clat (usec): min=307, max=2520, avg=483.98, stdev=61.25 00:16:54.999 lat (usec): min=312, max=2526, avg=490.21, stdev=61.49 00:16:54.999 clat percentiles (usec): 00:16:54.999 | 1.00th=[ 371], 5.00th=[ 383], 10.00th=[ 424], 20.00th=[ 445], 00:16:54.999 | 30.00th=[ 453], 40.00th=[ 465], 50.00th=[ 498], 60.00th=[ 506], 00:16:54.999 | 70.00th=[ 515], 80.00th=[ 519], 90.00th=[ 537], 95.00th=[ 562], 00:16:54.999 | 99.00th=[ 611], 99.50th=[ 635], 99.90th=[ 725], 99.95th=[ 996], 00:16:54.999 | 99.99th=[ 2507] 00:16:54.999 write: IOPS=932, BW=61.9MiB/s (64.9MB/s)(256MiB/4137msec); 0 zone resets 00:16:54.999 slat (usec): min=15, max=132, avg=23.30, stdev= 6.98 00:16:54.999 clat (usec): min=354, max=1032, avg=549.54, stdev=74.95 00:16:54.999 lat (usec): min=378, max=1067, avg=572.84, stdev=76.48 00:16:54.999 clat percentiles (usec): 00:16:54.999 | 1.00th=[ 400], 5.00th=[ 449], 10.00th=[ 465], 20.00th=[ 486], 00:16:54.999 | 30.00th=[ 523], 40.00th=[ 537], 50.00th=[ 545], 60.00th=[ 570], 00:16:54.999 | 70.00th=[ 586], 80.00th=[ 594], 90.00th=[ 611], 95.00th=[ 627], 00:16:54.999 | 99.00th=[ 914], 99.50th=[ 963], 99.90th=[ 1004], 99.95th=[ 1012], 00:16:54.999 | 99.99th=[ 1037] 00:16:54.999 bw ( KiB/s): min=59160, max=67048, per=100.00%, avg=63393.00, stdev=3085.42, samples=8 00:16:54.999 iops : min= 870, max= 986, avg=932.25, stdev=45.37, samples=8 00:16:54.999 lat (usec) : 500=36.62%, 750=62.48%, 1000=0.83% 00:16:54.999 lat 
(msec) : 2=0.05%, 4=0.01% 00:16:54.999 cpu : usr=99.32%, sys=0.02%, ctx=6, majf=0, minf=1181 00:16:54.999 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:54.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.999 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.999 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:54.999 00:16:54.999 Run status group 0 (all jobs): 00:16:54.999 READ: bw=61.5MiB/s (64.4MB/s), 61.5MiB/s-61.5MiB/s (64.4MB/s-64.4MB/s), io=255MiB (267MB), run=4142-4142msec 00:16:54.999 WRITE: bw=61.9MiB/s (64.9MB/s), 61.9MiB/s-61.9MiB/s (64.9MB/s-64.9MB/s), io=256MiB (269MB), run=4137-4137msec 00:16:55.258 ----------------------------------------------------- 00:16:55.258 Suppressions used: 00:16:55.258 count bytes template 00:16:55.258 1 5 /usr/src/fio/parse.c 00:16:55.258 1 8 libtcmalloc_minimal.so 00:16:55.258 1 904 libcrypto.so 00:16:55.258 ----------------------------------------------------- 00:16:55.258 00:16:55.258 09:42:32 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:55.258 09:42:32 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:55.258 09:42:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:55.258 09:42:33 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:55.258 09:42:33 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:55.258 09:42:33 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:55.258 09:42:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:55.258 09:42:33 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:55.259 09:42:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:55.259 09:42:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:55.259 09:42:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:55.259 09:42:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:55.259 09:42:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:55.259 09:42:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:55.259 09:42:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:55.259 09:42:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:55.259 09:42:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:55.259 09:42:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:55.259 09:42:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:55.518 09:42:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:55.518 09:42:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:55.518 09:42:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:55.518 09:42:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:55.518 09:42:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:55.518 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:55.518 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:55.518 fio-3.35 00:16:55.518 Starting 2 threads 00:17:22.064 00:17:22.064 first_half: (groupid=0, jobs=1): err= 0: pid=89103: Wed Jul 24 09:42:58 2024 00:17:22.064 read: IOPS=2661, BW=10.4MiB/s (10.9MB/s)(255MiB/24534msec) 00:17:22.064 slat (nsec): min=3418, max=59469, avg=6222.25, stdev=2439.34 00:17:22.064 clat (usec): min=1024, max=260677, avg=38118.70, stdev=17614.10 00:17:22.064 lat (usec): min=1032, max=260684, avg=38124.92, stdev=17614.27 00:17:22.064 clat percentiles (msec): 00:17:22.064 | 1.00th=[ 14], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 33], 00:17:22.064 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:17:22.064 | 70.00th=[ 36], 80.00th=[ 38], 90.00th=[ 44], 95.00th=[ 55], 00:17:22.064 | 99.00th=[ 138], 99.50th=[ 163], 99.90th=[ 186], 99.95th=[ 209], 00:17:22.064 | 99.99th=[ 251] 00:17:22.064 write: IOPS=3247, BW=12.7MiB/s (13.3MB/s)(256MiB/20180msec); 0 zone resets 00:17:22.064 slat (usec): min=4, max=492, avg= 7.80, stdev= 6.31 00:17:22.064 clat (usec): min=398, max=89358, avg=9910.50, stdev=16465.55 00:17:22.064 lat (usec): min=405, max=89365, avg=9918.30, stdev=16465.67 00:17:22.064 clat percentiles (usec): 00:17:22.064 | 1.00th=[ 971], 5.00th=[ 1205], 10.00th=[ 1401], 20.00th=[ 1795], 00:17:22.064 | 30.00th=[ 2737], 40.00th=[ 4228], 50.00th=[ 5145], 60.00th=[ 6194], 00:17:22.064 | 70.00th=[ 7373], 80.00th=[11076], 90.00th=[14091], 95.00th=[39584], 00:17:22.064 | 99.00th=[81265], 99.50th=[82314], 99.90th=[85459], 99.95th=[85459], 00:17:22.064 | 99.99th=[87557] 00:17:22.064 bw ( KiB/s): min= 528, max=46640, per=96.09%, avg=24966.10, stdev=12787.33, samples=21 00:17:22.064 iops : min= 132, max=11660, avg=6241.52, stdev=3196.83, samples=21 00:17:22.064 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.56% 00:17:22.064 lat (msec) : 2=11.64%, 4=7.23%, 10=19.28%, 20=7.58%, 50=47.87% 00:17:22.064 lat (msec) : 100=4.75%, 250=1.01%, 500=0.01% 00:17:22.064 cpu : usr=99.22%, sys=0.18%, ctx=40, majf=0, minf=5607 00:17:22.064 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:22.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.064 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:22.064 issued rwts: total=65303,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.064 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:22.064 second_half: (groupid=0, jobs=1): err= 0: pid=89104: Wed Jul 24 09:42:58 2024 00:17:22.064 read: IOPS=2651, BW=10.4MiB/s (10.9MB/s)(255MiB/24630msec) 00:17:22.064 slat (nsec): min=3471, max=38084, avg=6150.65, stdev=2288.69 00:17:22.064 clat (usec): min=880, max=266065, avg=37577.72, stdev=19112.60 00:17:22.064 lat (usec): min=889, max=266072, avg=37583.87, stdev=19112.89 00:17:22.064 clat percentiles (msec): 00:17:22.064 | 1.00th=[ 8], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 33], 00:17:22.064 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:17:22.064 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 40], 95.00th=[ 51], 00:17:22.064 | 
99.00th=[ 148], 99.50th=[ 165], 99.90th=[ 190], 99.95th=[ 224], 00:17:22.064 | 99.99th=[ 259] 00:17:22.064 write: IOPS=3440, BW=13.4MiB/s (14.1MB/s)(256MiB/19047msec); 0 zone resets 00:17:22.064 slat (usec): min=4, max=857, avg= 7.95, stdev= 5.67 00:17:22.064 clat (usec): min=437, max=89662, avg=10633.05, stdev=17378.10 00:17:22.064 lat (usec): min=446, max=89668, avg=10640.99, stdev=17378.25 00:17:22.064 clat percentiles (usec): 00:17:22.064 | 1.00th=[ 971], 5.00th=[ 1221], 10.00th=[ 1434], 20.00th=[ 1729], 00:17:22.064 | 30.00th=[ 2147], 40.00th=[ 3818], 50.00th=[ 5211], 60.00th=[ 6259], 00:17:22.064 | 70.00th=[ 7570], 80.00th=[11469], 90.00th=[28443], 95.00th=[51643], 00:17:22.065 | 99.00th=[82314], 99.50th=[83362], 99.90th=[85459], 99.95th=[86508], 00:17:22.065 | 99.99th=[88605] 00:17:22.065 bw ( KiB/s): min= 1080, max=55848, per=91.74%, avg=23834.95, stdev=17595.23, samples=22 00:17:22.065 iops : min= 270, max=13962, avg=5958.73, stdev=4398.79, samples=22 00:17:22.065 lat (usec) : 500=0.01%, 750=0.08%, 1000=0.57% 00:17:22.065 lat (msec) : 2=13.32%, 4=6.65%, 10=18.50%, 20=6.71%, 50=49.08% 00:17:22.065 lat (msec) : 100=3.81%, 250=1.26%, 500=0.01% 00:17:22.065 cpu : usr=99.19%, sys=0.22%, ctx=65, majf=0, minf=5531 00:17:22.065 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:22.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.065 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:22.065 issued rwts: total=65295,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.065 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:22.065 00:17:22.065 Run status group 0 (all jobs): 00:17:22.065 READ: bw=20.7MiB/s (21.7MB/s), 10.4MiB/s-10.4MiB/s (10.9MB/s-10.9MB/s), io=510MiB (535MB), run=24534-24630msec 00:17:22.065 WRITE: bw=25.4MiB/s (26.6MB/s), 12.7MiB/s-13.4MiB/s (13.3MB/s-14.1MB/s), io=512MiB (537MB), run=19047-20180msec 00:17:22.065 ----------------------------------------------------- 00:17:22.065 Suppressions used: 00:17:22.065 count bytes template 00:17:22.065 2 10 /usr/src/fio/parse.c 00:17:22.065 5 480 /usr/src/fio/iolog.c 00:17:22.065 1 8 libtcmalloc_minimal.so 00:17:22.065 1 904 libcrypto.so 00:17:22.065 ----------------------------------------------------- 00:17:22.065 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:22.065 09:42:59 
ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:22.065 09:42:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:22.324 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:22.324 fio-3.35 00:17:22.324 Starting 1 thread 00:17:37.205 00:17:37.205 test: (groupid=0, jobs=1): err= 0: pid=89420: Wed Jul 24 09:43:13 2024 00:17:37.205 read: IOPS=7598, BW=29.7MiB/s (31.1MB/s)(255MiB/8581msec) 00:17:37.205 slat (usec): min=3, max=365, avg= 5.62, stdev= 3.01 00:17:37.205 clat (usec): min=561, max=35402, avg=16836.74, stdev=1648.44 00:17:37.205 lat (usec): min=565, max=35410, avg=16842.36, stdev=1649.08 00:17:37.205 clat percentiles (usec): 00:17:37.205 | 1.00th=[14746], 5.00th=[15008], 10.00th=[15139], 20.00th=[15270], 00:17:37.205 | 30.00th=[15533], 40.00th=[15795], 50.00th=[16909], 60.00th=[17433], 00:17:37.205 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18744], 95.00th=[19792], 00:17:37.205 | 99.00th=[20579], 99.50th=[20841], 99.90th=[26084], 99.95th=[30540], 00:17:37.205 | 99.99th=[34341] 00:17:37.205 write: IOPS=14.6k, BW=57.1MiB/s (59.9MB/s)(256MiB/4485msec); 0 zone resets 00:17:37.205 slat (usec): min=4, max=1565, avg= 7.85, stdev= 9.26 00:17:37.205 clat (usec): min=582, max=48986, avg=8713.59, stdev=10493.52 00:17:37.205 lat (usec): min=587, max=48996, avg=8721.44, stdev=10493.53 00:17:37.205 clat percentiles (usec): 00:17:37.205 | 1.00th=[ 865], 5.00th=[ 1012], 10.00th=[ 1123], 20.00th=[ 1319], 00:17:37.205 | 30.00th=[ 1483], 40.00th=[ 1795], 50.00th=[ 5735], 60.00th=[ 6652], 00:17:37.205 | 70.00th=[ 7767], 80.00th=[10028], 90.00th=[31327], 95.00th=[32900], 00:17:37.205 | 99.00th=[34341], 99.50th=[34866], 99.90th=[36439], 99.95th=[40109], 00:17:37.205 | 99.99th=[45876] 00:17:37.205 bw ( KiB/s): min=49896, max=80704, per=99.67%, avg=58254.22, stdev=10454.03, samples=9 00:17:37.205 iops : min=12474, max=20176, avg=14563.56, stdev=2613.51, samples=9 00:17:37.205 lat (usec) : 750=0.09%, 1000=2.26% 00:17:37.205 lat (msec) : 2=18.28%, 4=0.53%, 10=19.00%, 20=49.68%, 50=10.15% 00:17:37.205 cpu : usr=98.18%, sys=0.62%, ctx=34, majf=0, minf=5577 00:17:37.205 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:37.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.205 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:37.205 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.205 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:37.205 00:17:37.205 Run status group 0 (all jobs): 00:17:37.205 READ: bw=29.7MiB/s (31.1MB/s), 29.7MiB/s-29.7MiB/s (31.1MB/s-31.1MB/s), io=255MiB (267MB), run=8581-8581msec 00:17:37.205 WRITE: bw=57.1MiB/s (59.9MB/s), 57.1MiB/s-57.1MiB/s (59.9MB/s-59.9MB/s), io=256MiB (268MB), run=4485-4485msec 00:17:37.205 ----------------------------------------------------- 00:17:37.205 Suppressions used: 00:17:37.205 count bytes template 00:17:37.205 1 5 /usr/src/fio/parse.c 00:17:37.205 2 192 /usr/src/fio/iolog.c 00:17:37.205 1 8 libtcmalloc_minimal.so 00:17:37.205 1 904 libcrypto.so 00:17:37.205 ----------------------------------------------------- 00:17:37.205 00:17:37.205 09:43:14 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:37.205 09:43:14 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:37.205 09:43:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:37.205 09:43:14 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:37.205 Remove shared memory files 00:17:37.205 09:43:14 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:17:37.205 09:43:14 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:37.205 09:43:14 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:17:37.205 09:43:14 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:17:37.205 09:43:14 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid74406 /dev/shm/spdk_tgt_trace.pid87808 00:17:37.205 09:43:14 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:37.205 09:43:14 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:17:37.205 00:17:37.205 real 0m57.847s 00:17:37.205 user 2m8.576s 00:17:37.205 sys 0m3.620s 00:17:37.205 09:43:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:37.205 ************************************ 00:17:37.205 END TEST ftl_fio_basic 00:17:37.205 ************************************ 00:17:37.205 09:43:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:37.205 09:43:14 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:37.205 09:43:14 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:37.205 09:43:14 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:37.205 09:43:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:37.205 ************************************ 00:17:37.205 START TEST ftl_bdevperf 00:17:37.205 ************************************ 00:17:37.205 09:43:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:37.206 * Looking for test storage... 
00:17:37.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:17:37.206 09:43:14 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=89641 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 89641 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 89641 ']' 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:37.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:37.206 09:43:14 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:37.465 [2024-07-24 09:43:15.036244] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:17:37.465 [2024-07-24 09:43:15.036383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89641 ] 00:17:37.465 [2024-07-24 09:43:15.201004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.465 [2024-07-24 09:43:15.245091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.032 09:43:15 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:38.032 09:43:15 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:17:38.032 09:43:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:38.032 09:43:15 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:17:38.032 09:43:15 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:38.032 09:43:15 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:17:38.308 09:43:15 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:17:38.308 09:43:15 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:38.308 09:43:16 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:38.308 09:43:16 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:17:38.571 09:43:16 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:38.571 09:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:38.571 09:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:38.571 09:43:16 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:38.571 09:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:38.571 09:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:38.571 09:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:38.571 { 00:17:38.571 "name": "nvme0n1", 00:17:38.571 "aliases": [ 00:17:38.571 "a1ad830c-0e3c-47aa-8f47-a397eda55802" 00:17:38.571 ], 00:17:38.571 "product_name": "NVMe disk", 00:17:38.571 "block_size": 4096, 00:17:38.571 "num_blocks": 1310720, 00:17:38.571 "uuid": "a1ad830c-0e3c-47aa-8f47-a397eda55802", 00:17:38.571 "assigned_rate_limits": { 00:17:38.571 "rw_ios_per_sec": 0, 00:17:38.571 "rw_mbytes_per_sec": 0, 00:17:38.571 "r_mbytes_per_sec": 0, 00:17:38.571 "w_mbytes_per_sec": 0 00:17:38.571 }, 00:17:38.571 "claimed": true, 00:17:38.571 "claim_type": "read_many_write_one", 00:17:38.571 "zoned": false, 00:17:38.571 "supported_io_types": { 00:17:38.571 "read": true, 00:17:38.571 "write": true, 00:17:38.571 "unmap": true, 00:17:38.572 "flush": true, 00:17:38.572 "reset": true, 00:17:38.572 "nvme_admin": true, 00:17:38.572 "nvme_io": true, 00:17:38.572 "nvme_io_md": false, 00:17:38.572 "write_zeroes": true, 00:17:38.572 "zcopy": false, 00:17:38.572 "get_zone_info": false, 00:17:38.572 "zone_management": false, 00:17:38.572 "zone_append": false, 00:17:38.572 "compare": true, 00:17:38.572 "compare_and_write": false, 00:17:38.572 "abort": true, 00:17:38.572 "seek_hole": false, 00:17:38.572 "seek_data": false, 00:17:38.572 "copy": true, 00:17:38.572 "nvme_iov_md": false 00:17:38.572 }, 00:17:38.572 "driver_specific": { 00:17:38.572 "nvme": [ 00:17:38.572 { 00:17:38.572 "pci_address": "0000:00:11.0", 00:17:38.572 "trid": { 00:17:38.572 "trtype": "PCIe", 00:17:38.572 "traddr": "0000:00:11.0" 00:17:38.572 }, 00:17:38.572 "ctrlr_data": { 00:17:38.572 "cntlid": 0, 00:17:38.572 "vendor_id": "0x1b36", 00:17:38.572 "model_number": "QEMU NVMe Ctrl", 00:17:38.572 "serial_number": "12341", 00:17:38.572 "firmware_revision": "8.0.0", 00:17:38.572 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:38.572 "oacs": { 00:17:38.572 "security": 0, 00:17:38.572 "format": 1, 00:17:38.572 "firmware": 0, 00:17:38.572 "ns_manage": 1 00:17:38.572 }, 00:17:38.572 "multi_ctrlr": false, 00:17:38.572 "ana_reporting": false 00:17:38.572 }, 00:17:38.572 "vs": { 00:17:38.572 "nvme_version": "1.4" 00:17:38.572 }, 00:17:38.572 "ns_data": { 00:17:38.572 "id": 1, 00:17:38.572 "can_share": false 00:17:38.572 } 00:17:38.572 } 00:17:38.572 ], 00:17:38.572 "mp_policy": "active_passive" 00:17:38.572 } 00:17:38.572 } 00:17:38.572 ]' 00:17:38.572 09:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:38.572 09:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:38.572 09:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:38.831 09:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:38.831 09:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:38.831 09:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:17:38.831 09:43:16 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:17:38.831 09:43:16 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:38.831 09:43:16 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:17:38.831 09:43:16 ftl.ftl_bdevperf 
-- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:38.831 09:43:16 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:38.831 09:43:16 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=773e420f-013b-433e-8730-2c19838290b2 00:17:38.831 09:43:16 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:17:38.831 09:43:16 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 773e420f-013b-433e-8730-2c19838290b2 00:17:39.090 09:43:16 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:39.349 09:43:17 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=bfdac264-9da7-48f6-a815-b897ec447c2e 00:17:39.349 09:43:17 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u bfdac264-9da7-48f6-a815-b897ec447c2e 00:17:39.607 09:43:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=02b99fbd-af35-4304-a77b-4e34c877f620 00:17:39.607 09:43:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 02b99fbd-af35-4304-a77b-4e34c877f620 00:17:39.607 09:43:17 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:17:39.607 09:43:17 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:39.607 09:43:17 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=02b99fbd-af35-4304-a77b-4e34c877f620 00:17:39.607 09:43:17 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:17:39.607 09:43:17 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 02b99fbd-af35-4304-a77b-4e34c877f620 00:17:39.607 09:43:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=02b99fbd-af35-4304-a77b-4e34c877f620 00:17:39.607 09:43:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:39.607 09:43:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:39.607 09:43:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:39.607 09:43:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 02b99fbd-af35-4304-a77b-4e34c877f620 00:17:39.865 09:43:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:39.865 { 00:17:39.865 "name": "02b99fbd-af35-4304-a77b-4e34c877f620", 00:17:39.865 "aliases": [ 00:17:39.866 "lvs/nvme0n1p0" 00:17:39.866 ], 00:17:39.866 "product_name": "Logical Volume", 00:17:39.866 "block_size": 4096, 00:17:39.866 "num_blocks": 26476544, 00:17:39.866 "uuid": "02b99fbd-af35-4304-a77b-4e34c877f620", 00:17:39.866 "assigned_rate_limits": { 00:17:39.866 "rw_ios_per_sec": 0, 00:17:39.866 "rw_mbytes_per_sec": 0, 00:17:39.866 "r_mbytes_per_sec": 0, 00:17:39.866 "w_mbytes_per_sec": 0 00:17:39.866 }, 00:17:39.866 "claimed": false, 00:17:39.866 "zoned": false, 00:17:39.866 "supported_io_types": { 00:17:39.866 "read": true, 00:17:39.866 "write": true, 00:17:39.866 "unmap": true, 00:17:39.866 "flush": false, 00:17:39.866 "reset": true, 00:17:39.866 "nvme_admin": false, 00:17:39.866 "nvme_io": false, 00:17:39.866 "nvme_io_md": false, 00:17:39.866 "write_zeroes": true, 00:17:39.866 "zcopy": false, 00:17:39.866 "get_zone_info": false, 00:17:39.866 "zone_management": false, 00:17:39.866 "zone_append": false, 00:17:39.866 "compare": false, 00:17:39.866 "compare_and_write": false, 00:17:39.866 "abort": false, 00:17:39.866 "seek_hole": true, 
00:17:39.866 "seek_data": true, 00:17:39.866 "copy": false, 00:17:39.866 "nvme_iov_md": false 00:17:39.866 }, 00:17:39.866 "driver_specific": { 00:17:39.866 "lvol": { 00:17:39.866 "lvol_store_uuid": "bfdac264-9da7-48f6-a815-b897ec447c2e", 00:17:39.866 "base_bdev": "nvme0n1", 00:17:39.866 "thin_provision": true, 00:17:39.866 "num_allocated_clusters": 0, 00:17:39.866 "snapshot": false, 00:17:39.866 "clone": false, 00:17:39.866 "esnap_clone": false 00:17:39.866 } 00:17:39.866 } 00:17:39.866 } 00:17:39.866 ]' 00:17:39.866 09:43:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:39.866 09:43:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:39.866 09:43:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:39.866 09:43:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:39.866 09:43:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:39.866 09:43:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:39.866 09:43:17 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:17:39.866 09:43:17 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:17:39.866 09:43:17 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:40.124 09:43:17 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:40.125 09:43:17 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:40.125 09:43:17 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 02b99fbd-af35-4304-a77b-4e34c877f620 00:17:40.125 09:43:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=02b99fbd-af35-4304-a77b-4e34c877f620 00:17:40.125 09:43:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:40.125 09:43:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:40.125 09:43:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:40.125 09:43:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 02b99fbd-af35-4304-a77b-4e34c877f620 00:17:40.383 09:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:40.383 { 00:17:40.383 "name": "02b99fbd-af35-4304-a77b-4e34c877f620", 00:17:40.383 "aliases": [ 00:17:40.383 "lvs/nvme0n1p0" 00:17:40.383 ], 00:17:40.383 "product_name": "Logical Volume", 00:17:40.384 "block_size": 4096, 00:17:40.384 "num_blocks": 26476544, 00:17:40.384 "uuid": "02b99fbd-af35-4304-a77b-4e34c877f620", 00:17:40.384 "assigned_rate_limits": { 00:17:40.384 "rw_ios_per_sec": 0, 00:17:40.384 "rw_mbytes_per_sec": 0, 00:17:40.384 "r_mbytes_per_sec": 0, 00:17:40.384 "w_mbytes_per_sec": 0 00:17:40.384 }, 00:17:40.384 "claimed": false, 00:17:40.384 "zoned": false, 00:17:40.384 "supported_io_types": { 00:17:40.384 "read": true, 00:17:40.384 "write": true, 00:17:40.384 "unmap": true, 00:17:40.384 "flush": false, 00:17:40.384 "reset": true, 00:17:40.384 "nvme_admin": false, 00:17:40.384 "nvme_io": false, 00:17:40.384 "nvme_io_md": false, 00:17:40.384 "write_zeroes": true, 00:17:40.384 "zcopy": false, 00:17:40.384 "get_zone_info": false, 00:17:40.384 "zone_management": false, 00:17:40.384 "zone_append": false, 00:17:40.384 "compare": false, 00:17:40.384 "compare_and_write": false, 00:17:40.384 "abort": false, 00:17:40.384 "seek_hole": true, 00:17:40.384 "seek_data": true, 00:17:40.384 
"copy": false, 00:17:40.384 "nvme_iov_md": false 00:17:40.384 }, 00:17:40.384 "driver_specific": { 00:17:40.384 "lvol": { 00:17:40.384 "lvol_store_uuid": "bfdac264-9da7-48f6-a815-b897ec447c2e", 00:17:40.384 "base_bdev": "nvme0n1", 00:17:40.384 "thin_provision": true, 00:17:40.384 "num_allocated_clusters": 0, 00:17:40.384 "snapshot": false, 00:17:40.384 "clone": false, 00:17:40.384 "esnap_clone": false 00:17:40.384 } 00:17:40.384 } 00:17:40.384 } 00:17:40.384 ]' 00:17:40.384 09:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:40.384 09:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:40.384 09:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:40.384 09:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:40.384 09:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:40.384 09:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:40.384 09:43:18 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:17:40.384 09:43:18 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:40.643 09:43:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:17:40.643 09:43:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 02b99fbd-af35-4304-a77b-4e34c877f620 00:17:40.643 09:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=02b99fbd-af35-4304-a77b-4e34c877f620 00:17:40.643 09:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:40.643 09:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:40.643 09:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:40.643 09:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 02b99fbd-af35-4304-a77b-4e34c877f620 00:17:40.902 09:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:40.902 { 00:17:40.902 "name": "02b99fbd-af35-4304-a77b-4e34c877f620", 00:17:40.902 "aliases": [ 00:17:40.902 "lvs/nvme0n1p0" 00:17:40.902 ], 00:17:40.902 "product_name": "Logical Volume", 00:17:40.902 "block_size": 4096, 00:17:40.902 "num_blocks": 26476544, 00:17:40.902 "uuid": "02b99fbd-af35-4304-a77b-4e34c877f620", 00:17:40.902 "assigned_rate_limits": { 00:17:40.902 "rw_ios_per_sec": 0, 00:17:40.902 "rw_mbytes_per_sec": 0, 00:17:40.902 "r_mbytes_per_sec": 0, 00:17:40.902 "w_mbytes_per_sec": 0 00:17:40.902 }, 00:17:40.902 "claimed": false, 00:17:40.902 "zoned": false, 00:17:40.902 "supported_io_types": { 00:17:40.902 "read": true, 00:17:40.902 "write": true, 00:17:40.902 "unmap": true, 00:17:40.902 "flush": false, 00:17:40.902 "reset": true, 00:17:40.902 "nvme_admin": false, 00:17:40.902 "nvme_io": false, 00:17:40.902 "nvme_io_md": false, 00:17:40.902 "write_zeroes": true, 00:17:40.902 "zcopy": false, 00:17:40.902 "get_zone_info": false, 00:17:40.902 "zone_management": false, 00:17:40.902 "zone_append": false, 00:17:40.902 "compare": false, 00:17:40.902 "compare_and_write": false, 00:17:40.902 "abort": false, 00:17:40.902 "seek_hole": true, 00:17:40.902 "seek_data": true, 00:17:40.902 "copy": false, 00:17:40.902 "nvme_iov_md": false 00:17:40.902 }, 00:17:40.902 "driver_specific": { 00:17:40.902 "lvol": { 00:17:40.902 "lvol_store_uuid": "bfdac264-9da7-48f6-a815-b897ec447c2e", 00:17:40.902 "base_bdev": 
"nvme0n1", 00:17:40.902 "thin_provision": true, 00:17:40.902 "num_allocated_clusters": 0, 00:17:40.902 "snapshot": false, 00:17:40.902 "clone": false, 00:17:40.902 "esnap_clone": false 00:17:40.902 } 00:17:40.902 } 00:17:40.902 } 00:17:40.902 ]' 00:17:40.902 09:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:40.902 09:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:40.902 09:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:40.902 09:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:40.902 09:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:40.902 09:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:40.902 09:43:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:17:40.902 09:43:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 02b99fbd-af35-4304-a77b-4e34c877f620 -c nvc0n1p0 --l2p_dram_limit 20 00:17:41.162 [2024-07-24 09:43:18.745478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.162 [2024-07-24 09:43:18.745542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:41.162 [2024-07-24 09:43:18.745558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:41.162 [2024-07-24 09:43:18.745572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.162 [2024-07-24 09:43:18.745638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.162 [2024-07-24 09:43:18.745657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:41.162 [2024-07-24 09:43:18.745668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:41.162 [2024-07-24 09:43:18.745684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.162 [2024-07-24 09:43:18.745703] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:41.162 [2024-07-24 09:43:18.745981] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:41.162 [2024-07-24 09:43:18.746000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.162 [2024-07-24 09:43:18.746013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:41.162 [2024-07-24 09:43:18.746024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:17:41.162 [2024-07-24 09:43:18.746037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.162 [2024-07-24 09:43:18.746079] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9ccca48c-0053-43dd-8273-fd29a37e424b 00:17:41.162 [2024-07-24 09:43:18.747512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.162 [2024-07-24 09:43:18.747547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:41.162 [2024-07-24 09:43:18.747562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:41.162 [2024-07-24 09:43:18.747582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.162 [2024-07-24 09:43:18.754982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.162 [2024-07-24 09:43:18.755014] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:41.162 [2024-07-24 09:43:18.755029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.341 ms 00:17:41.162 [2024-07-24 09:43:18.755042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.162 [2024-07-24 09:43:18.755129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.162 [2024-07-24 09:43:18.755141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:41.162 [2024-07-24 09:43:18.755154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:41.162 [2024-07-24 09:43:18.755164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.162 [2024-07-24 09:43:18.755227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.162 [2024-07-24 09:43:18.755239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:41.162 [2024-07-24 09:43:18.755253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:41.162 [2024-07-24 09:43:18.755262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.162 [2024-07-24 09:43:18.755299] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:41.162 [2024-07-24 09:43:18.757078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.162 [2024-07-24 09:43:18.757110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:41.162 [2024-07-24 09:43:18.757122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.793 ms 00:17:41.162 [2024-07-24 09:43:18.757144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.162 [2024-07-24 09:43:18.757177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.162 [2024-07-24 09:43:18.757200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:41.162 [2024-07-24 09:43:18.757211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:41.162 [2024-07-24 09:43:18.757237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.162 [2024-07-24 09:43:18.757263] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:41.162 [2024-07-24 09:43:18.757402] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:41.162 [2024-07-24 09:43:18.757418] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:41.162 [2024-07-24 09:43:18.757436] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:41.162 [2024-07-24 09:43:18.757449] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:41.162 [2024-07-24 09:43:18.757463] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:41.162 [2024-07-24 09:43:18.757478] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:41.162 [2024-07-24 09:43:18.757490] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:41.162 [2024-07-24 09:43:18.757500] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:41.162 [2024-07-24 09:43:18.757519] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:17:41.162 [2024-07-24 09:43:18.757529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.162 [2024-07-24 09:43:18.757542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:41.163 [2024-07-24 09:43:18.757558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:17:41.163 [2024-07-24 09:43:18.757571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.163 [2024-07-24 09:43:18.757639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.163 [2024-07-24 09:43:18.757655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:41.163 [2024-07-24 09:43:18.757668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:41.163 [2024-07-24 09:43:18.757688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.163 [2024-07-24 09:43:18.757771] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:41.163 [2024-07-24 09:43:18.757787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:41.163 [2024-07-24 09:43:18.757797] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.163 [2024-07-24 09:43:18.757810] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.163 [2024-07-24 09:43:18.757820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:41.163 [2024-07-24 09:43:18.757832] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:41.163 [2024-07-24 09:43:18.757841] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:41.163 [2024-07-24 09:43:18.757853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:41.163 [2024-07-24 09:43:18.757871] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:41.163 [2024-07-24 09:43:18.757884] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.163 [2024-07-24 09:43:18.757893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:41.163 [2024-07-24 09:43:18.757906] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:41.163 [2024-07-24 09:43:18.757915] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.163 [2024-07-24 09:43:18.757929] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:41.163 [2024-07-24 09:43:18.757939] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:41.163 [2024-07-24 09:43:18.757950] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.163 [2024-07-24 09:43:18.757961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:41.163 [2024-07-24 09:43:18.757973] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:41.163 [2024-07-24 09:43:18.757982] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.163 [2024-07-24 09:43:18.757994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:41.163 [2024-07-24 09:43:18.758003] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:41.163 [2024-07-24 09:43:18.758015] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.163 [2024-07-24 09:43:18.758024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:41.163 [2024-07-24 09:43:18.758036] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:41.163 [2024-07-24 09:43:18.758045] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.163 [2024-07-24 09:43:18.758056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:41.163 [2024-07-24 09:43:18.758066] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:41.163 [2024-07-24 09:43:18.758079] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.163 [2024-07-24 09:43:18.758088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:41.163 [2024-07-24 09:43:18.758102] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:41.163 [2024-07-24 09:43:18.758111] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.163 [2024-07-24 09:43:18.758122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:41.163 [2024-07-24 09:43:18.758131] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:41.163 [2024-07-24 09:43:18.758143] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.163 [2024-07-24 09:43:18.758151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:41.163 [2024-07-24 09:43:18.758163] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:41.163 [2024-07-24 09:43:18.758172] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.163 [2024-07-24 09:43:18.758184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:41.163 [2024-07-24 09:43:18.758397] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:41.163 [2024-07-24 09:43:18.758439] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.163 [2024-07-24 09:43:18.758471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:41.163 [2024-07-24 09:43:18.758503] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:41.163 [2024-07-24 09:43:18.758532] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.163 [2024-07-24 09:43:18.758564] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:41.163 [2024-07-24 09:43:18.758593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:41.163 [2024-07-24 09:43:18.758685] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.163 [2024-07-24 09:43:18.758728] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.163 [2024-07-24 09:43:18.758765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:41.163 [2024-07-24 09:43:18.758795] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:41.163 [2024-07-24 09:43:18.758828] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:41.163 [2024-07-24 09:43:18.758857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:41.163 [2024-07-24 09:43:18.758940] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:41.163 [2024-07-24 09:43:18.758974] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:41.163 [2024-07-24 09:43:18.759012] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:41.163 [2024-07-24 09:43:18.759062] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.163 [2024-07-24 09:43:18.759113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:41.163 [2024-07-24 09:43:18.759224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:41.163 [2024-07-24 09:43:18.759278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:41.163 [2024-07-24 09:43:18.759325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:41.163 [2024-07-24 09:43:18.759374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:41.163 [2024-07-24 09:43:18.759469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:41.163 [2024-07-24 09:43:18.759524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:41.163 [2024-07-24 09:43:18.759571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:41.163 [2024-07-24 09:43:18.759620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:41.163 [2024-07-24 09:43:18.759791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:41.163 [2024-07-24 09:43:18.759840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:41.163 [2024-07-24 09:43:18.759871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:41.163 [2024-07-24 09:43:18.759884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:41.163 [2024-07-24 09:43:18.759896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:41.163 [2024-07-24 09:43:18.759908] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:41.163 [2024-07-24 09:43:18.759919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.163 [2024-07-24 09:43:18.759933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:41.163 [2024-07-24 09:43:18.759944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:41.163 [2024-07-24 09:43:18.759957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:41.163 [2024-07-24 09:43:18.759968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:41.163 [2024-07-24 09:43:18.759985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.163 [2024-07-24 09:43:18.759996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:41.163 [2024-07-24 09:43:18.760013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.264 ms 00:17:41.163 [2024-07-24 09:43:18.760024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.163 [2024-07-24 09:43:18.760089] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:41.163 [2024-07-24 09:43:18.760103] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:45.361 [2024-07-24 09:43:22.719378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.361 [2024-07-24 09:43:22.719439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:45.361 [2024-07-24 09:43:22.719459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3965.712 ms 00:17:45.361 [2024-07-24 09:43:22.719470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.361 [2024-07-24 09:43:22.739246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.361 [2024-07-24 09:43:22.739308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:45.361 [2024-07-24 09:43:22.739334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.713 ms 00:17:45.361 [2024-07-24 09:43:22.739349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.361 [2024-07-24 09:43:22.739449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.361 [2024-07-24 09:43:22.739460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:45.361 [2024-07-24 09:43:22.739474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:45.361 [2024-07-24 09:43:22.739490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.361 [2024-07-24 09:43:22.750700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.361 [2024-07-24 09:43:22.750750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:45.361 [2024-07-24 09:43:22.750775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.160 ms 00:17:45.361 [2024-07-24 09:43:22.750789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.361 [2024-07-24 09:43:22.750830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.361 [2024-07-24 09:43:22.750844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:45.361 [2024-07-24 09:43:22.750861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:45.361 [2024-07-24 09:43:22.750874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.361 [2024-07-24 09:43:22.751396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.361 [2024-07-24 09:43:22.751414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:45.361 [2024-07-24 09:43:22.751432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:17:45.361 [2024-07-24 09:43:22.751449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.361 [2024-07-24 09:43:22.751591] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.361 [2024-07-24 09:43:22.751615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:45.361 [2024-07-24 09:43:22.751632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:17:45.361 [2024-07-24 09:43:22.751645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.361 [2024-07-24 09:43:22.757655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.361 [2024-07-24 09:43:22.757698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:45.361 [2024-07-24 09:43:22.757714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.992 ms 00:17:45.361 [2024-07-24 09:43:22.757724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.361 [2024-07-24 09:43:22.765329] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:45.361 [2024-07-24 09:43:22.771138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.361 [2024-07-24 09:43:22.771182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:45.361 [2024-07-24 09:43:22.771228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.370 ms 00:17:45.361 [2024-07-24 09:43:22.771242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.361 [2024-07-24 09:43:22.858965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.362 [2024-07-24 09:43:22.859035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:45.362 [2024-07-24 09:43:22.859050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.836 ms 00:17:45.362 [2024-07-24 09:43:22.859082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.362 [2024-07-24 09:43:22.859287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.362 [2024-07-24 09:43:22.859305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:45.362 [2024-07-24 09:43:22.859316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:17:45.362 [2024-07-24 09:43:22.859347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.362 [2024-07-24 09:43:22.863197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.362 [2024-07-24 09:43:22.863249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:45.362 [2024-07-24 09:43:22.863263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.826 ms 00:17:45.362 [2024-07-24 09:43:22.863276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.362 [2024-07-24 09:43:22.866176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.362 [2024-07-24 09:43:22.866228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:45.362 [2024-07-24 09:43:22.866242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.869 ms 00:17:45.362 [2024-07-24 09:43:22.866254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.362 [2024-07-24 09:43:22.866514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.362 [2024-07-24 09:43:22.866531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:45.362 [2024-07-24 09:43:22.866544] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:17:45.362 [2024-07-24 09:43:22.866559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.362 [2024-07-24 09:43:22.910016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.362 [2024-07-24 09:43:22.910065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:45.362 [2024-07-24 09:43:22.910087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.508 ms 00:17:45.362 [2024-07-24 09:43:22.910100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.362 [2024-07-24 09:43:22.914632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.362 [2024-07-24 09:43:22.914672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:45.362 [2024-07-24 09:43:22.914684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.474 ms 00:17:45.362 [2024-07-24 09:43:22.914697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.362 [2024-07-24 09:43:22.917913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.362 [2024-07-24 09:43:22.917951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:45.362 [2024-07-24 09:43:22.917963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.159 ms 00:17:45.362 [2024-07-24 09:43:22.917975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.362 [2024-07-24 09:43:22.921547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.362 [2024-07-24 09:43:22.921586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:45.362 [2024-07-24 09:43:22.921599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.516 ms 00:17:45.362 [2024-07-24 09:43:22.921614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.362 [2024-07-24 09:43:22.921681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.362 [2024-07-24 09:43:22.921697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:45.362 [2024-07-24 09:43:22.921707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:45.362 [2024-07-24 09:43:22.921719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.362 [2024-07-24 09:43:22.921779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.362 [2024-07-24 09:43:22.921796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:45.362 [2024-07-24 09:43:22.921810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:45.362 [2024-07-24 09:43:22.921822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.362 [2024-07-24 09:43:22.922854] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4183.759 ms, result 0 00:17:45.362 { 00:17:45.362 "name": "ftl0", 00:17:45.362 "uuid": "9ccca48c-0053-43dd-8273-fd29a37e424b" 00:17:45.362 } 00:17:45.362 09:43:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:17:45.362 09:43:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:17:45.362 09:43:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:17:45.362 09:43:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:17:45.621 [2024-07-24 09:43:23.232856] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:45.621 I/O size of 69632 is greater than zero copy threshold (65536). 00:17:45.621 Zero copy mechanism will not be used. 00:17:45.621 Running I/O for 4 seconds... 00:17:49.813 00:17:49.813 Latency(us) 00:17:49.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.813 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:49.813 ftl0 : 4.00 1443.83 95.88 0.00 0.00 725.66 245.10 5816.65 00:17:49.813 =================================================================================================================== 00:17:49.813 Total : 1443.83 95.88 0.00 0.00 725.66 245.10 5816.65 00:17:49.813 [2024-07-24 09:43:27.232827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:49.813 0 00:17:49.813 09:43:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:49.813 [2024-07-24 09:43:27.327833] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:49.813 Running I/O for 4 seconds... 00:17:54.094 00:17:54.094 Latency(us) 00:17:54.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.094 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:17:54.094 ftl0 : 4.01 11027.64 43.08 0.00 0.00 11586.19 231.94 32846.96 00:17:54.094 =================================================================================================================== 00:17:54.094 Total : 11027.64 43.08 0.00 0.00 11586.19 0.00 32846.96 00:17:54.094 [2024-07-24 09:43:31.339789] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:54.094 0 00:17:54.094 09:43:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:17:54.094 [2024-07-24 09:43:31.447900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:54.094 Running I/O for 4 seconds... 
00:17:58.284 00:17:58.284 Latency(us) 00:17:58.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.284 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:58.284 Verification LBA range: start 0x0 length 0x1400000 00:17:58.284 ftl0 : 4.01 9205.19 35.96 0.00 0.00 13863.12 241.81 25477.45 00:17:58.284 =================================================================================================================== 00:17:58.284 Total : 9205.19 35.96 0.00 0.00 13863.12 0.00 25477.45 00:17:58.284 [2024-07-24 09:43:35.456791] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:58.284 0 00:17:58.284 09:43:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:17:58.284 [2024-07-24 09:43:35.638548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.284 [2024-07-24 09:43:35.638614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:58.284 [2024-07-24 09:43:35.638633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:58.284 [2024-07-24 09:43:35.638647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.284 [2024-07-24 09:43:35.638671] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:58.284 [2024-07-24 09:43:35.639336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.284 [2024-07-24 09:43:35.639355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:58.284 [2024-07-24 09:43:35.639378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:17:58.284 [2024-07-24 09:43:35.639389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.284 [2024-07-24 09:43:35.641718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.284 [2024-07-24 09:43:35.641769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:58.284 [2024-07-24 09:43:35.641785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.305 ms 00:17:58.284 [2024-07-24 09:43:35.641795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.284 [2024-07-24 09:43:35.849010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.284 [2024-07-24 09:43:35.849083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:58.284 [2024-07-24 09:43:35.849104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 207.508 ms 00:17:58.284 [2024-07-24 09:43:35.849115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.284 [2024-07-24 09:43:35.854273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.284 [2024-07-24 09:43:35.854312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:58.284 [2024-07-24 09:43:35.854329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.122 ms 00:17:58.284 [2024-07-24 09:43:35.854341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.284 [2024-07-24 09:43:35.856321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.284 [2024-07-24 09:43:35.856356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:58.284 [2024-07-24 09:43:35.856375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.891 ms 00:17:58.284 [2024-07-24 09:43:35.856384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.284 [2024-07-24 09:43:35.861347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.284 [2024-07-24 09:43:35.861390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:58.284 [2024-07-24 09:43:35.861406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.933 ms 00:17:58.284 [2024-07-24 09:43:35.861416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.284 [2024-07-24 09:43:35.861527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.284 [2024-07-24 09:43:35.861540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:58.284 [2024-07-24 09:43:35.861553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:58.284 [2024-07-24 09:43:35.861565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.284 [2024-07-24 09:43:35.863725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.284 [2024-07-24 09:43:35.863780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:58.284 [2024-07-24 09:43:35.863808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.136 ms 00:17:58.284 [2024-07-24 09:43:35.863825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.284 [2024-07-24 09:43:35.865660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.284 [2024-07-24 09:43:35.865717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:58.284 [2024-07-24 09:43:35.865743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.778 ms 00:17:58.284 [2024-07-24 09:43:35.865760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.284 [2024-07-24 09:43:35.867151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.285 [2024-07-24 09:43:35.867198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:58.285 [2024-07-24 09:43:35.867214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.335 ms 00:17:58.285 [2024-07-24 09:43:35.867224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.285 [2024-07-24 09:43:35.868382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.285 [2024-07-24 09:43:35.868416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:58.285 [2024-07-24 09:43:35.868430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:17:58.285 [2024-07-24 09:43:35.868440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.285 [2024-07-24 09:43:35.868471] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:58.285 [2024-07-24 09:43:35.868488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 
[2024-07-24 09:43:35.868539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:17:58.285 [2024-07-24 09:43:35.868878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.868984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:58.285 [2024-07-24 09:43:35.869338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-07-24 09:43:35.869956] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:58.286 [2024-07-24 09:43:35.869970] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9ccca48c-0053-43dd-8273-fd29a37e424b 00:17:58.286 [2024-07-24 09:43:35.869982] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:58.286 [2024-07-24 09:43:35.869993] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
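Taking the dump above at face value, every band reports "0 / 261120" valid blocks and wr_cnt 0; assuming the same 4096-byte block size reported for the NVMe bdevs later in this log, one band is 261120 * 4096 bytes, about 1020 MiB, so the roughly 100 bands listed cover on the order of 100 GiB of base device space. The statistics that follow read consistently with that: the 960 total writes are internal (metadata) traffic only, user writes are 0, and the write amplification factor, the ratio of media writes to user writes, has no defined value when user writes are zero and is therefore printed as "inf".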
00:17:58.286 [2024-07-24 09:43:35.870003] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:58.286 [2024-07-24 09:43:35.870016] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:58.286 [2024-07-24 09:43:35.870025] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:58.286 [2024-07-24 09:43:35.870039] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:58.286 [2024-07-24 09:43:35.870052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:58.286 [2024-07-24 09:43:35.870064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:58.286 [2024-07-24 09:43:35.870073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:58.286 [2024-07-24 09:43:35.870085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.286 [2024-07-24 09:43:35.870102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:58.286 [2024-07-24 09:43:35.870116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.619 ms 00:17:58.286 [2024-07-24 09:43:35.870132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.286 [2024-07-24 09:43:35.871849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.286 [2024-07-24 09:43:35.871875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:58.286 [2024-07-24 09:43:35.871889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.698 ms 00:17:58.286 [2024-07-24 09:43:35.871900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.286 [2024-07-24 09:43:35.872015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.286 [2024-07-24 09:43:35.872026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:58.286 [2024-07-24 09:43:35.872038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:58.286 [2024-07-24 09:43:35.872048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.286 [2024-07-24 09:43:35.878289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-07-24 09:43:35.878322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:58.287 [2024-07-24 09:43:35.878337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-07-24 09:43:35.878350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-07-24 09:43:35.878409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-07-24 09:43:35.878426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:58.287 [2024-07-24 09:43:35.878440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-07-24 09:43:35.878450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-07-24 09:43:35.878528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-07-24 09:43:35.878542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:58.287 [2024-07-24 09:43:35.878555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-07-24 09:43:35.878565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-07-24 09:43:35.878586] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-07-24 09:43:35.878597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:58.287 [2024-07-24 09:43:35.878613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-07-24 09:43:35.878623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-07-24 09:43:35.890996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-07-24 09:43:35.891038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:58.287 [2024-07-24 09:43:35.891055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-07-24 09:43:35.891065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-07-24 09:43:35.899141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-07-24 09:43:35.899185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:58.287 [2024-07-24 09:43:35.899211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-07-24 09:43:35.899221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-07-24 09:43:35.899296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-07-24 09:43:35.899310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:58.287 [2024-07-24 09:43:35.899323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-07-24 09:43:35.899333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-07-24 09:43:35.899370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-07-24 09:43:35.899386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:58.287 [2024-07-24 09:43:35.899398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-07-24 09:43:35.899408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-07-24 09:43:35.899489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-07-24 09:43:35.899501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:58.287 [2024-07-24 09:43:35.899513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-07-24 09:43:35.899523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-07-24 09:43:35.899560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-07-24 09:43:35.899572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:58.287 [2024-07-24 09:43:35.899588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-07-24 09:43:35.899598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-07-24 09:43:35.899638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-07-24 09:43:35.899649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:58.287 [2024-07-24 09:43:35.899661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-07-24 09:43:35.899671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:58.287 [2024-07-24 09:43:35.899716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-07-24 09:43:35.899730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:58.287 [2024-07-24 09:43:35.899743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-07-24 09:43:35.899752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-07-24 09:43:35.899883] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 261.713 ms, result 0 00:17:58.287 true 00:17:58.287 09:43:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 89641 00:17:58.287 09:43:35 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 89641 ']' 00:17:58.287 09:43:35 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 89641 00:17:58.287 09:43:35 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:17:58.287 09:43:35 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:58.287 09:43:35 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89641 00:17:58.287 09:43:35 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:58.287 09:43:35 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:58.287 killing process with pid 89641 00:17:58.287 Received shutdown signal, test time was about 4.000000 seconds 00:17:58.287 00:17:58.287 Latency(us) 00:17:58.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.287 =================================================================================================================== 00:17:58.287 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:58.287 09:43:35 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89641' 00:17:58.287 09:43:35 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 89641 00:17:58.287 09:43:35 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 89641 00:17:58.855 09:43:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:17:58.855 09:43:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:17:58.855 09:43:36 ftl.ftl_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:58.855 09:43:36 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:58.855 09:43:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:17:58.855 Remove shared memory files 00:17:58.855 09:43:36 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:58.855 09:43:36 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:17:58.855 09:43:36 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:17:58.855 09:43:36 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:17:58.855 09:43:36 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:58.855 09:43:36 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:17:58.855 00:17:58.855 real 0m21.733s 00:17:58.855 user 0m24.172s 00:17:58.855 sys 0m1.121s 00:17:58.855 09:43:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:58.855 09:43:36 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:58.855 ************************************ 00:17:58.855 END TEST ftl_bdevperf 00:17:58.855 
************************************ 00:17:58.855 09:43:36 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:58.855 09:43:36 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:58.855 09:43:36 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:58.855 09:43:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:58.855 ************************************ 00:17:58.855 START TEST ftl_trim 00:17:58.855 ************************************ 00:17:58.855 09:43:36 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:59.114 * Looking for test storage... 00:17:59.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 
00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=90013 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:59.114 09:43:36 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 90013 00:17:59.114 09:43:36 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 90013 ']' 00:17:59.115 09:43:36 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.115 09:43:36 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:59.115 09:43:36 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.115 09:43:36 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:59.115 09:43:36 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:59.115 [2024-07-24 09:43:36.846353] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
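At this point trim.sh has exported its configuration (base device 0000:00:11.0, cache device 0000:00:10.0, FTL_BDEV_NAME=ftl0, FTL_JSON_CONF pointing at test/ftl/config/ftl.json), started spdk_tgt with core mask 0x7, and is blocking in waitforlisten until the target's RPC socket answers. A minimal sketch of that kind of readiness wait is shown below; it assumes that polling any RPC (here spdk_get_version) is enough to detect a listening target, and it is not the actual autotest_common.sh helper:

    #!/usr/bin/env bash
    # Hypothetical readiness poll: keep asking the freshly started spdk_tgt for
    # its version over the default RPC socket until it answers or we give up.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        if "$rpc" -s "$sock" spdk_get_version >/dev/null 2>&1; then
            echo "spdk_tgt is listening on $sock"
            break
        fi
        sleep 0.1
    done

Once the socket responds, the rpc.py calls that follow in the log (bdev_nvme_attach_controller, bdev_lvol_create_lvstore, bdev_split_create, bdev_ftl_create) can be issued against the running target.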
00:17:59.115 [2024-07-24 09:43:36.846675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90013 ] 00:17:59.374 [2024-07-24 09:43:37.014861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:59.374 [2024-07-24 09:43:37.060371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.374 [2024-07-24 09:43:37.060402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.374 [2024-07-24 09:43:37.060497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.941 09:43:37 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:59.941 09:43:37 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:59.941 09:43:37 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:59.941 09:43:37 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:59.941 09:43:37 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:59.941 09:43:37 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:59.941 09:43:37 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:59.941 09:43:37 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:00.199 09:43:37 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:00.199 09:43:37 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:00.199 09:43:37 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:00.199 09:43:37 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:00.199 09:43:37 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:00.199 09:43:37 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:00.199 09:43:37 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:00.199 09:43:37 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:00.458 09:43:38 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:00.458 { 00:18:00.458 "name": "nvme0n1", 00:18:00.458 "aliases": [ 00:18:00.458 "908401bc-a996-4a59-8bbf-1a46b4b79e64" 00:18:00.458 ], 00:18:00.458 "product_name": "NVMe disk", 00:18:00.458 "block_size": 4096, 00:18:00.458 "num_blocks": 1310720, 00:18:00.458 "uuid": "908401bc-a996-4a59-8bbf-1a46b4b79e64", 00:18:00.458 "assigned_rate_limits": { 00:18:00.458 "rw_ios_per_sec": 0, 00:18:00.458 "rw_mbytes_per_sec": 0, 00:18:00.458 "r_mbytes_per_sec": 0, 00:18:00.458 "w_mbytes_per_sec": 0 00:18:00.458 }, 00:18:00.458 "claimed": true, 00:18:00.458 "claim_type": "read_many_write_one", 00:18:00.458 "zoned": false, 00:18:00.458 "supported_io_types": { 00:18:00.458 "read": true, 00:18:00.458 "write": true, 00:18:00.458 "unmap": true, 00:18:00.458 "flush": true, 00:18:00.458 "reset": true, 00:18:00.458 "nvme_admin": true, 00:18:00.458 "nvme_io": true, 00:18:00.458 "nvme_io_md": false, 00:18:00.458 "write_zeroes": true, 00:18:00.458 "zcopy": false, 00:18:00.458 "get_zone_info": false, 00:18:00.458 "zone_management": false, 00:18:00.458 "zone_append": false, 00:18:00.458 "compare": true, 00:18:00.458 "compare_and_write": false, 00:18:00.458 "abort": true, 00:18:00.458 "seek_hole": false, 00:18:00.458 "seek_data": false, 00:18:00.458 
"copy": true, 00:18:00.458 "nvme_iov_md": false 00:18:00.458 }, 00:18:00.458 "driver_specific": { 00:18:00.458 "nvme": [ 00:18:00.458 { 00:18:00.458 "pci_address": "0000:00:11.0", 00:18:00.458 "trid": { 00:18:00.458 "trtype": "PCIe", 00:18:00.458 "traddr": "0000:00:11.0" 00:18:00.458 }, 00:18:00.458 "ctrlr_data": { 00:18:00.458 "cntlid": 0, 00:18:00.458 "vendor_id": "0x1b36", 00:18:00.458 "model_number": "QEMU NVMe Ctrl", 00:18:00.458 "serial_number": "12341", 00:18:00.458 "firmware_revision": "8.0.0", 00:18:00.458 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:00.458 "oacs": { 00:18:00.458 "security": 0, 00:18:00.458 "format": 1, 00:18:00.458 "firmware": 0, 00:18:00.458 "ns_manage": 1 00:18:00.458 }, 00:18:00.458 "multi_ctrlr": false, 00:18:00.458 "ana_reporting": false 00:18:00.458 }, 00:18:00.458 "vs": { 00:18:00.458 "nvme_version": "1.4" 00:18:00.458 }, 00:18:00.458 "ns_data": { 00:18:00.458 "id": 1, 00:18:00.458 "can_share": false 00:18:00.458 } 00:18:00.458 } 00:18:00.458 ], 00:18:00.458 "mp_policy": "active_passive" 00:18:00.458 } 00:18:00.458 } 00:18:00.458 ]' 00:18:00.458 09:43:38 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:00.458 09:43:38 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:00.458 09:43:38 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:00.458 09:43:38 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:00.458 09:43:38 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:00.458 09:43:38 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:18:00.458 09:43:38 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:18:00.458 09:43:38 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:00.458 09:43:38 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:00.458 09:43:38 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:00.458 09:43:38 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:00.715 09:43:38 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=bfdac264-9da7-48f6-a815-b897ec447c2e 00:18:00.715 09:43:38 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:18:00.715 09:43:38 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bfdac264-9da7-48f6-a815-b897ec447c2e 00:18:00.972 09:43:38 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:00.972 09:43:38 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=1101f753-a32a-4878-8eb1-76677ef6fae4 00:18:00.972 09:43:38 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1101f753-a32a-4878-8eb1-76677ef6fae4 00:18:01.230 09:43:38 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=26e92beb-3101-4ff1-901d-fd7e6ef5af3f 00:18:01.230 09:43:38 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 26e92beb-3101-4ff1-901d-fd7e6ef5af3f 00:18:01.230 09:43:38 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:18:01.230 09:43:38 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:01.230 09:43:38 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=26e92beb-3101-4ff1-901d-fd7e6ef5af3f 00:18:01.230 09:43:38 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:18:01.230 09:43:38 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 26e92beb-3101-4ff1-901d-fd7e6ef5af3f 00:18:01.230 09:43:38 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=26e92beb-3101-4ff1-901d-fd7e6ef5af3f 00:18:01.230 09:43:38 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:01.230 09:43:38 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:01.230 09:43:38 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:01.230 09:43:38 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 26e92beb-3101-4ff1-901d-fd7e6ef5af3f 00:18:01.488 09:43:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:01.488 { 00:18:01.488 "name": "26e92beb-3101-4ff1-901d-fd7e6ef5af3f", 00:18:01.488 "aliases": [ 00:18:01.488 "lvs/nvme0n1p0" 00:18:01.488 ], 00:18:01.488 "product_name": "Logical Volume", 00:18:01.488 "block_size": 4096, 00:18:01.488 "num_blocks": 26476544, 00:18:01.488 "uuid": "26e92beb-3101-4ff1-901d-fd7e6ef5af3f", 00:18:01.488 "assigned_rate_limits": { 00:18:01.488 "rw_ios_per_sec": 0, 00:18:01.488 "rw_mbytes_per_sec": 0, 00:18:01.488 "r_mbytes_per_sec": 0, 00:18:01.488 "w_mbytes_per_sec": 0 00:18:01.488 }, 00:18:01.488 "claimed": false, 00:18:01.488 "zoned": false, 00:18:01.488 "supported_io_types": { 00:18:01.488 "read": true, 00:18:01.488 "write": true, 00:18:01.488 "unmap": true, 00:18:01.488 "flush": false, 00:18:01.488 "reset": true, 00:18:01.488 "nvme_admin": false, 00:18:01.488 "nvme_io": false, 00:18:01.488 "nvme_io_md": false, 00:18:01.488 "write_zeroes": true, 00:18:01.488 "zcopy": false, 00:18:01.488 "get_zone_info": false, 00:18:01.488 "zone_management": false, 00:18:01.488 "zone_append": false, 00:18:01.488 "compare": false, 00:18:01.488 "compare_and_write": false, 00:18:01.488 "abort": false, 00:18:01.488 "seek_hole": true, 00:18:01.488 "seek_data": true, 00:18:01.488 "copy": false, 00:18:01.488 "nvme_iov_md": false 00:18:01.488 }, 00:18:01.488 "driver_specific": { 00:18:01.488 "lvol": { 00:18:01.488 "lvol_store_uuid": "1101f753-a32a-4878-8eb1-76677ef6fae4", 00:18:01.488 "base_bdev": "nvme0n1", 00:18:01.488 "thin_provision": true, 00:18:01.488 "num_allocated_clusters": 0, 00:18:01.488 "snapshot": false, 00:18:01.488 "clone": false, 00:18:01.488 "esnap_clone": false 00:18:01.488 } 00:18:01.488 } 00:18:01.488 } 00:18:01.488 ]' 00:18:01.488 09:43:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:01.488 09:43:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:01.488 09:43:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:01.488 09:43:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:01.488 09:43:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:01.488 09:43:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:01.488 09:43:39 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:18:01.488 09:43:39 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:18:01.488 09:43:39 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:01.748 09:43:39 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:01.748 09:43:39 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:01.748 09:43:39 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 26e92beb-3101-4ff1-901d-fd7e6ef5af3f 00:18:01.748 09:43:39 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=26e92beb-3101-4ff1-901d-fd7e6ef5af3f 00:18:01.748 
09:43:39 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:01.748 09:43:39 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:01.748 09:43:39 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:01.748 09:43:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 26e92beb-3101-4ff1-901d-fd7e6ef5af3f 00:18:02.006 09:43:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:02.006 { 00:18:02.006 "name": "26e92beb-3101-4ff1-901d-fd7e6ef5af3f", 00:18:02.006 "aliases": [ 00:18:02.006 "lvs/nvme0n1p0" 00:18:02.006 ], 00:18:02.006 "product_name": "Logical Volume", 00:18:02.006 "block_size": 4096, 00:18:02.006 "num_blocks": 26476544, 00:18:02.006 "uuid": "26e92beb-3101-4ff1-901d-fd7e6ef5af3f", 00:18:02.006 "assigned_rate_limits": { 00:18:02.006 "rw_ios_per_sec": 0, 00:18:02.006 "rw_mbytes_per_sec": 0, 00:18:02.006 "r_mbytes_per_sec": 0, 00:18:02.006 "w_mbytes_per_sec": 0 00:18:02.006 }, 00:18:02.006 "claimed": false, 00:18:02.006 "zoned": false, 00:18:02.006 "supported_io_types": { 00:18:02.006 "read": true, 00:18:02.006 "write": true, 00:18:02.006 "unmap": true, 00:18:02.006 "flush": false, 00:18:02.006 "reset": true, 00:18:02.006 "nvme_admin": false, 00:18:02.006 "nvme_io": false, 00:18:02.006 "nvme_io_md": false, 00:18:02.006 "write_zeroes": true, 00:18:02.006 "zcopy": false, 00:18:02.006 "get_zone_info": false, 00:18:02.006 "zone_management": false, 00:18:02.006 "zone_append": false, 00:18:02.006 "compare": false, 00:18:02.006 "compare_and_write": false, 00:18:02.006 "abort": false, 00:18:02.006 "seek_hole": true, 00:18:02.006 "seek_data": true, 00:18:02.006 "copy": false, 00:18:02.006 "nvme_iov_md": false 00:18:02.006 }, 00:18:02.006 "driver_specific": { 00:18:02.006 "lvol": { 00:18:02.006 "lvol_store_uuid": "1101f753-a32a-4878-8eb1-76677ef6fae4", 00:18:02.006 "base_bdev": "nvme0n1", 00:18:02.006 "thin_provision": true, 00:18:02.006 "num_allocated_clusters": 0, 00:18:02.006 "snapshot": false, 00:18:02.006 "clone": false, 00:18:02.006 "esnap_clone": false 00:18:02.006 } 00:18:02.006 } 00:18:02.006 } 00:18:02.006 ]' 00:18:02.006 09:43:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:02.006 09:43:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:02.006 09:43:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:02.006 09:43:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:02.006 09:43:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:02.006 09:43:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:02.006 09:43:39 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:18:02.006 09:43:39 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:02.264 09:43:40 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:02.264 09:43:40 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:02.264 09:43:40 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 26e92beb-3101-4ff1-901d-fd7e6ef5af3f 00:18:02.264 09:43:40 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=26e92beb-3101-4ff1-901d-fd7e6ef5af3f 00:18:02.264 09:43:40 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:02.264 09:43:40 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:02.264 09:43:40 ftl.ftl_trim -- 
common/autotest_common.sh@1381 -- # local nb 00:18:02.264 09:43:40 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 26e92beb-3101-4ff1-901d-fd7e6ef5af3f 00:18:02.522 09:43:40 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:02.522 { 00:18:02.522 "name": "26e92beb-3101-4ff1-901d-fd7e6ef5af3f", 00:18:02.522 "aliases": [ 00:18:02.522 "lvs/nvme0n1p0" 00:18:02.522 ], 00:18:02.522 "product_name": "Logical Volume", 00:18:02.522 "block_size": 4096, 00:18:02.522 "num_blocks": 26476544, 00:18:02.522 "uuid": "26e92beb-3101-4ff1-901d-fd7e6ef5af3f", 00:18:02.522 "assigned_rate_limits": { 00:18:02.522 "rw_ios_per_sec": 0, 00:18:02.522 "rw_mbytes_per_sec": 0, 00:18:02.522 "r_mbytes_per_sec": 0, 00:18:02.522 "w_mbytes_per_sec": 0 00:18:02.522 }, 00:18:02.522 "claimed": false, 00:18:02.522 "zoned": false, 00:18:02.522 "supported_io_types": { 00:18:02.522 "read": true, 00:18:02.522 "write": true, 00:18:02.522 "unmap": true, 00:18:02.522 "flush": false, 00:18:02.522 "reset": true, 00:18:02.522 "nvme_admin": false, 00:18:02.522 "nvme_io": false, 00:18:02.522 "nvme_io_md": false, 00:18:02.522 "write_zeroes": true, 00:18:02.522 "zcopy": false, 00:18:02.522 "get_zone_info": false, 00:18:02.522 "zone_management": false, 00:18:02.522 "zone_append": false, 00:18:02.522 "compare": false, 00:18:02.522 "compare_and_write": false, 00:18:02.522 "abort": false, 00:18:02.522 "seek_hole": true, 00:18:02.522 "seek_data": true, 00:18:02.522 "copy": false, 00:18:02.522 "nvme_iov_md": false 00:18:02.522 }, 00:18:02.522 "driver_specific": { 00:18:02.522 "lvol": { 00:18:02.522 "lvol_store_uuid": "1101f753-a32a-4878-8eb1-76677ef6fae4", 00:18:02.522 "base_bdev": "nvme0n1", 00:18:02.522 "thin_provision": true, 00:18:02.522 "num_allocated_clusters": 0, 00:18:02.522 "snapshot": false, 00:18:02.522 "clone": false, 00:18:02.522 "esnap_clone": false 00:18:02.522 } 00:18:02.522 } 00:18:02.522 } 00:18:02.522 ]' 00:18:02.522 09:43:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:02.522 09:43:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:02.522 09:43:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:02.522 09:43:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:02.522 09:43:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:02.523 09:43:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:02.523 09:43:40 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:02.523 09:43:40 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 26e92beb-3101-4ff1-901d-fd7e6ef5af3f -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:02.782 [2024-07-24 09:43:40.474583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.782 [2024-07-24 09:43:40.474636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:02.782 [2024-07-24 09:43:40.474660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:02.782 [2024-07-24 09:43:40.474673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.782 [2024-07-24 09:43:40.477360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.782 [2024-07-24 09:43:40.477401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:02.782 [2024-07-24 09:43:40.477419] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.651 ms 00:18:02.782 [2024-07-24 09:43:40.477435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.782 [2024-07-24 09:43:40.477546] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:02.782 [2024-07-24 09:43:40.477877] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:02.782 [2024-07-24 09:43:40.477915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.782 [2024-07-24 09:43:40.477937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:02.782 [2024-07-24 09:43:40.477959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:18:02.782 [2024-07-24 09:43:40.477992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.782 [2024-07-24 09:43:40.478175] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 06ad3162-afda-493d-8a37-9594fb773b68 00:18:02.782 [2024-07-24 09:43:40.479998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.782 [2024-07-24 09:43:40.480039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:02.782 [2024-07-24 09:43:40.480053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:02.782 [2024-07-24 09:43:40.480069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.782 [2024-07-24 09:43:40.487571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.782 [2024-07-24 09:43:40.487606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:02.782 [2024-07-24 09:43:40.487620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.436 ms 00:18:02.782 [2024-07-24 09:43:40.487633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.782 [2024-07-24 09:43:40.487755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.782 [2024-07-24 09:43:40.487784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:02.782 [2024-07-24 09:43:40.487798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:02.782 [2024-07-24 09:43:40.487810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.782 [2024-07-24 09:43:40.487852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.782 [2024-07-24 09:43:40.487867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:02.782 [2024-07-24 09:43:40.487877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:02.782 [2024-07-24 09:43:40.487889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.782 [2024-07-24 09:43:40.487921] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:02.782 [2024-07-24 09:43:40.489744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.782 [2024-07-24 09:43:40.489778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:02.782 [2024-07-24 09:43:40.489793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.828 ms 00:18:02.782 [2024-07-24 09:43:40.489803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.782 [2024-07-24 
09:43:40.489869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.782 [2024-07-24 09:43:40.489881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:02.782 [2024-07-24 09:43:40.489893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:02.782 [2024-07-24 09:43:40.489917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.782 [2024-07-24 09:43:40.489978] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:02.782 [2024-07-24 09:43:40.490137] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:02.782 [2024-07-24 09:43:40.490159] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:02.782 [2024-07-24 09:43:40.490173] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:02.782 [2024-07-24 09:43:40.490210] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:02.782 [2024-07-24 09:43:40.490224] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:02.782 [2024-07-24 09:43:40.490238] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:02.782 [2024-07-24 09:43:40.490248] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:02.782 [2024-07-24 09:43:40.490263] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:02.782 [2024-07-24 09:43:40.490273] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:02.782 [2024-07-24 09:43:40.490291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.782 [2024-07-24 09:43:40.490307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:02.782 [2024-07-24 09:43:40.490342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:18:02.782 [2024-07-24 09:43:40.490352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.782 [2024-07-24 09:43:40.490441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.782 [2024-07-24 09:43:40.490451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:02.782 [2024-07-24 09:43:40.490469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:02.782 [2024-07-24 09:43:40.490478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.782 [2024-07-24 09:43:40.490592] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:02.782 [2024-07-24 09:43:40.490603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:02.782 [2024-07-24 09:43:40.490616] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:02.782 [2024-07-24 09:43:40.490626] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.782 [2024-07-24 09:43:40.490638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:02.782 [2024-07-24 09:43:40.490647] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:02.782 [2024-07-24 09:43:40.490659] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:02.782 [2024-07-24 09:43:40.490669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:18:02.782 [2024-07-24 09:43:40.490680] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:02.782 [2024-07-24 09:43:40.490689] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:02.782 [2024-07-24 09:43:40.490701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:02.782 [2024-07-24 09:43:40.490711] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:02.782 [2024-07-24 09:43:40.490722] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:02.782 [2024-07-24 09:43:40.490731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:02.782 [2024-07-24 09:43:40.490745] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:02.782 [2024-07-24 09:43:40.490755] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.782 [2024-07-24 09:43:40.490767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:02.782 [2024-07-24 09:43:40.490776] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:02.782 [2024-07-24 09:43:40.490788] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.782 [2024-07-24 09:43:40.490797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:02.782 [2024-07-24 09:43:40.490808] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:02.782 [2024-07-24 09:43:40.490817] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:02.782 [2024-07-24 09:43:40.490828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:02.782 [2024-07-24 09:43:40.490837] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:02.782 [2024-07-24 09:43:40.490850] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:02.782 [2024-07-24 09:43:40.490859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:02.782 [2024-07-24 09:43:40.490871] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:02.782 [2024-07-24 09:43:40.490880] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:02.782 [2024-07-24 09:43:40.490891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:02.782 [2024-07-24 09:43:40.490900] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:02.782 [2024-07-24 09:43:40.490914] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:02.782 [2024-07-24 09:43:40.490923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:02.782 [2024-07-24 09:43:40.490934] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:02.782 [2024-07-24 09:43:40.490943] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:02.782 [2024-07-24 09:43:40.490954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:02.782 [2024-07-24 09:43:40.490963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:02.783 [2024-07-24 09:43:40.490974] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:02.783 [2024-07-24 09:43:40.490983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:02.783 [2024-07-24 09:43:40.490995] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:02.783 [2024-07-24 09:43:40.491003] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.783 [2024-07-24 09:43:40.491014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:02.783 [2024-07-24 09:43:40.491023] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:02.783 [2024-07-24 09:43:40.491034] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.783 [2024-07-24 09:43:40.491043] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:02.783 [2024-07-24 09:43:40.491055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:02.783 [2024-07-24 09:43:40.491065] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:02.783 [2024-07-24 09:43:40.491081] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.783 [2024-07-24 09:43:40.491104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:02.783 [2024-07-24 09:43:40.491120] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:02.783 [2024-07-24 09:43:40.491129] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:02.783 [2024-07-24 09:43:40.491141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:02.783 [2024-07-24 09:43:40.491150] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:02.783 [2024-07-24 09:43:40.491161] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:02.783 [2024-07-24 09:43:40.491175] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:02.783 [2024-07-24 09:43:40.491393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:02.783 [2024-07-24 09:43:40.491467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:02.783 [2024-07-24 09:43:40.491519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:02.783 [2024-07-24 09:43:40.491565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:02.783 [2024-07-24 09:43:40.491732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:02.783 [2024-07-24 09:43:40.491823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:02.783 [2024-07-24 09:43:40.491875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:02.783 [2024-07-24 09:43:40.491921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:02.783 [2024-07-24 09:43:40.492037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:02.783 [2024-07-24 09:43:40.492083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:02.783 [2024-07-24 09:43:40.492198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:02.783 [2024-07-24 09:43:40.492316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:02.783 [2024-07-24 09:43:40.492408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:02.783 [2024-07-24 09:43:40.492461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:02.783 [2024-07-24 09:43:40.492543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:02.783 [2024-07-24 09:43:40.492694] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:02.783 [2024-07-24 09:43:40.492748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:02.783 [2024-07-24 09:43:40.492836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:02.783 [2024-07-24 09:43:40.492891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:02.783 [2024-07-24 09:43:40.493035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:02.783 [2024-07-24 09:43:40.493104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:02.783 [2024-07-24 09:43:40.493119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.783 [2024-07-24 09:43:40.493133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:02.783 [2024-07-24 09:43:40.493144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.585 ms 00:18:02.783 [2024-07-24 09:43:40.493158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.783 [2024-07-24 09:43:40.493314] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
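Before the five NV-cache chunks are scrubbed, the layout dump above already fixes the geometry of the new FTL instance, and the reported values reconcile with the bdev_ftl_create parameters used earlier in this run (a reading of the printed numbers, under no additional assumptions):

    102400 MiB data region * (1 - 0.10 overprovisioning) = 92160 MiB user capacity
    92160 MiB / 4 KiB per block                           = 23592960 blocks (= L2P entries)
    23592960 entries * 4 B per entry                      = 90 MiB L2P table (the 90.00 MiB l2p region)

Because the create call also passed --l2p_dram_limit 60, only part of that table can stay resident in DRAM, which is what the later "l2p maximum resident size is: 59 (of 60) MiB" notice reflects.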
00:18:02.783 [2024-07-24 09:43:40.493336] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:06.974 [2024-07-24 09:43:43.960353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.974 [2024-07-24 09:43:43.960420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:06.974 [2024-07-24 09:43:43.960437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3472.669 ms 00:18:06.974 [2024-07-24 09:43:43.960450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.974 [2024-07-24 09:43:43.971714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.974 [2024-07-24 09:43:43.971777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:06.974 [2024-07-24 09:43:43.971792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.175 ms 00:18:06.974 [2024-07-24 09:43:43.971822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.974 [2024-07-24 09:43:43.971986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.974 [2024-07-24 09:43:43.972009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:06.974 [2024-07-24 09:43:43.972020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:06.974 [2024-07-24 09:43:43.972033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.974 [2024-07-24 09:43:43.992182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.974 [2024-07-24 09:43:43.992248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:06.974 [2024-07-24 09:43:43.992267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.143 ms 00:18:06.974 [2024-07-24 09:43:43.992283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.974 [2024-07-24 09:43:43.992411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.974 [2024-07-24 09:43:43.992431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:06.974 [2024-07-24 09:43:43.992445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:06.974 [2024-07-24 09:43:43.992461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.974 [2024-07-24 09:43:43.992929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.974 [2024-07-24 09:43:43.992949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:06.974 [2024-07-24 09:43:43.992963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:18:06.974 [2024-07-24 09:43:43.992979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.974 [2024-07-24 09:43:43.993132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.974 [2024-07-24 09:43:43.993160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:06.974 [2024-07-24 09:43:43.993174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:18:06.974 [2024-07-24 09:43:43.993222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.974 [2024-07-24 09:43:44.001069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.974 [2024-07-24 09:43:44.001113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:06.974 [2024-07-24 
09:43:44.001127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.813 ms 00:18:06.974 [2024-07-24 09:43:44.001141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.974 [2024-07-24 09:43:44.009015] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:06.974 [2024-07-24 09:43:44.025674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.974 [2024-07-24 09:43:44.025727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:06.974 [2024-07-24 09:43:44.025745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.446 ms 00:18:06.974 [2024-07-24 09:43:44.025755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.974 [2024-07-24 09:43:44.108977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.974 [2024-07-24 09:43:44.109025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:06.974 [2024-07-24 09:43:44.109044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.242 ms 00:18:06.974 [2024-07-24 09:43:44.109058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.974 [2024-07-24 09:43:44.109279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.974 [2024-07-24 09:43:44.109293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:06.974 [2024-07-24 09:43:44.109308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:18:06.974 [2024-07-24 09:43:44.109317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.974 [2024-07-24 09:43:44.112910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.974 [2024-07-24 09:43:44.112947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:06.974 [2024-07-24 09:43:44.112976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.559 ms 00:18:06.974 [2024-07-24 09:43:44.112987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.974 [2024-07-24 09:43:44.115674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.974 [2024-07-24 09:43:44.115707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:06.974 [2024-07-24 09:43:44.115723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.634 ms 00:18:06.974 [2024-07-24 09:43:44.115732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.974 [2024-07-24 09:43:44.116023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.974 [2024-07-24 09:43:44.116039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:06.974 [2024-07-24 09:43:44.116069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:18:06.974 [2024-07-24 09:43:44.116079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.974 [2024-07-24 09:43:44.157662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.974 [2024-07-24 09:43:44.157836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:06.974 [2024-07-24 09:43:44.157926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.605 ms 00:18:06.974 [2024-07-24 09:43:44.157967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.974 [2024-07-24 09:43:44.162643] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.974 [2024-07-24 09:43:44.162789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:06.974 [2024-07-24 09:43:44.162875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.593 ms 00:18:06.974 [2024-07-24 09:43:44.162926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.974 [2024-07-24 09:43:44.166303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.974 [2024-07-24 09:43:44.166336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:06.975 [2024-07-24 09:43:44.166365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.309 ms 00:18:06.975 [2024-07-24 09:43:44.166375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.975 [2024-07-24 09:43:44.170213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.975 [2024-07-24 09:43:44.170250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:06.975 [2024-07-24 09:43:44.170265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.779 ms 00:18:06.975 [2024-07-24 09:43:44.170275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.975 [2024-07-24 09:43:44.170353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.975 [2024-07-24 09:43:44.170366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:06.975 [2024-07-24 09:43:44.170379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:06.975 [2024-07-24 09:43:44.170389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.975 [2024-07-24 09:43:44.170481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.975 [2024-07-24 09:43:44.170493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:06.975 [2024-07-24 09:43:44.170505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:06.975 [2024-07-24 09:43:44.170518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.975 [2024-07-24 09:43:44.171502] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:06.975 [2024-07-24 09:43:44.172593] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3702.588 ms, result 0 00:18:06.975 [2024-07-24 09:43:44.173393] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:06.975 { 00:18:06.975 "name": "ftl0", 00:18:06.975 "uuid": "06ad3162-afda-493d-8a37-9594fb773b68" 00:18:06.975 } 00:18:06.975 09:43:44 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:06.975 09:43:44 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:18:06.975 09:43:44 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:06.975 09:43:44 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:18:06.975 09:43:44 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:06.975 09:43:44 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:06.975 09:43:44 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:06.975 09:43:44 ftl.ftl_trim -- common/autotest_common.sh@906 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:06.975 [ 00:18:06.975 { 00:18:06.975 "name": "ftl0", 00:18:06.975 "aliases": [ 00:18:06.975 "06ad3162-afda-493d-8a37-9594fb773b68" 00:18:06.975 ], 00:18:06.975 "product_name": "FTL disk", 00:18:06.975 "block_size": 4096, 00:18:06.975 "num_blocks": 23592960, 00:18:06.975 "uuid": "06ad3162-afda-493d-8a37-9594fb773b68", 00:18:06.975 "assigned_rate_limits": { 00:18:06.975 "rw_ios_per_sec": 0, 00:18:06.975 "rw_mbytes_per_sec": 0, 00:18:06.975 "r_mbytes_per_sec": 0, 00:18:06.975 "w_mbytes_per_sec": 0 00:18:06.975 }, 00:18:06.975 "claimed": false, 00:18:06.975 "zoned": false, 00:18:06.975 "supported_io_types": { 00:18:06.975 "read": true, 00:18:06.975 "write": true, 00:18:06.975 "unmap": true, 00:18:06.975 "flush": true, 00:18:06.975 "reset": false, 00:18:06.975 "nvme_admin": false, 00:18:06.975 "nvme_io": false, 00:18:06.975 "nvme_io_md": false, 00:18:06.975 "write_zeroes": true, 00:18:06.975 "zcopy": false, 00:18:06.975 "get_zone_info": false, 00:18:06.975 "zone_management": false, 00:18:06.975 "zone_append": false, 00:18:06.975 "compare": false, 00:18:06.975 "compare_and_write": false, 00:18:06.975 "abort": false, 00:18:06.975 "seek_hole": false, 00:18:06.975 "seek_data": false, 00:18:06.975 "copy": false, 00:18:06.975 "nvme_iov_md": false 00:18:06.975 }, 00:18:06.975 "driver_specific": { 00:18:06.975 "ftl": { 00:18:06.975 "base_bdev": "26e92beb-3101-4ff1-901d-fd7e6ef5af3f", 00:18:06.975 "cache": "nvc0n1p0" 00:18:06.975 } 00:18:06.975 } 00:18:06.975 } 00:18:06.975 ] 00:18:06.975 09:43:44 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:18:06.975 09:43:44 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:06.975 09:43:44 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:06.975 09:43:44 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:18:06.975 09:43:44 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:07.235 09:43:44 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:07.235 { 00:18:07.235 "name": "ftl0", 00:18:07.235 "aliases": [ 00:18:07.235 "06ad3162-afda-493d-8a37-9594fb773b68" 00:18:07.235 ], 00:18:07.235 "product_name": "FTL disk", 00:18:07.235 "block_size": 4096, 00:18:07.235 "num_blocks": 23592960, 00:18:07.235 "uuid": "06ad3162-afda-493d-8a37-9594fb773b68", 00:18:07.235 "assigned_rate_limits": { 00:18:07.235 "rw_ios_per_sec": 0, 00:18:07.235 "rw_mbytes_per_sec": 0, 00:18:07.235 "r_mbytes_per_sec": 0, 00:18:07.235 "w_mbytes_per_sec": 0 00:18:07.235 }, 00:18:07.235 "claimed": false, 00:18:07.235 "zoned": false, 00:18:07.235 "supported_io_types": { 00:18:07.235 "read": true, 00:18:07.235 "write": true, 00:18:07.235 "unmap": true, 00:18:07.235 "flush": true, 00:18:07.235 "reset": false, 00:18:07.235 "nvme_admin": false, 00:18:07.235 "nvme_io": false, 00:18:07.235 "nvme_io_md": false, 00:18:07.235 "write_zeroes": true, 00:18:07.235 "zcopy": false, 00:18:07.235 "get_zone_info": false, 00:18:07.235 "zone_management": false, 00:18:07.235 "zone_append": false, 00:18:07.235 "compare": false, 00:18:07.235 "compare_and_write": false, 00:18:07.235 "abort": false, 00:18:07.235 "seek_hole": false, 00:18:07.235 "seek_data": false, 00:18:07.235 "copy": false, 00:18:07.235 "nvme_iov_md": false 00:18:07.235 }, 00:18:07.235 "driver_specific": { 00:18:07.235 "ftl": { 00:18:07.235 "base_bdev": "26e92beb-3101-4ff1-901d-fd7e6ef5af3f", 00:18:07.235 "cache": "nvc0n1p0" 
00:18:07.235 } 00:18:07.235 } 00:18:07.235 } 00:18:07.235 ]' 00:18:07.235 09:43:44 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:07.235 09:43:44 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:18:07.235 09:43:44 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:07.496 [2024-07-24 09:43:45.159652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.496 [2024-07-24 09:43:45.159702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:07.496 [2024-07-24 09:43:45.159717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:07.496 [2024-07-24 09:43:45.159730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.496 [2024-07-24 09:43:45.159766] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:07.496 [2024-07-24 09:43:45.160485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.496 [2024-07-24 09:43:45.160500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:07.496 [2024-07-24 09:43:45.160513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:18:07.496 [2024-07-24 09:43:45.160527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.496 [2024-07-24 09:43:45.161047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.496 [2024-07-24 09:43:45.161063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:07.496 [2024-07-24 09:43:45.161076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:18:07.496 [2024-07-24 09:43:45.161086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.497 [2024-07-24 09:43:45.163914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.497 [2024-07-24 09:43:45.163939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:07.497 [2024-07-24 09:43:45.163953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.800 ms 00:18:07.497 [2024-07-24 09:43:45.163963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.497 [2024-07-24 09:43:45.169705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.497 [2024-07-24 09:43:45.169740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:07.497 [2024-07-24 09:43:45.169755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.670 ms 00:18:07.497 [2024-07-24 09:43:45.169765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.497 [2024-07-24 09:43:45.171335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.497 [2024-07-24 09:43:45.171370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:07.497 [2024-07-24 09:43:45.171384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.472 ms 00:18:07.497 [2024-07-24 09:43:45.171394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.497 [2024-07-24 09:43:45.176252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.497 [2024-07-24 09:43:45.176290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:07.497 [2024-07-24 09:43:45.176306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.791 ms 00:18:07.497 
[2024-07-24 09:43:45.176319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.497 [2024-07-24 09:43:45.176509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.497 [2024-07-24 09:43:45.176521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:07.497 [2024-07-24 09:43:45.176549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:18:07.497 [2024-07-24 09:43:45.176559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.497 [2024-07-24 09:43:45.178435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.497 [2024-07-24 09:43:45.178468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:07.497 [2024-07-24 09:43:45.178483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.844 ms 00:18:07.497 [2024-07-24 09:43:45.178494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.497 [2024-07-24 09:43:45.180281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.497 [2024-07-24 09:43:45.180314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:07.497 [2024-07-24 09:43:45.180328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.737 ms 00:18:07.497 [2024-07-24 09:43:45.180337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.497 [2024-07-24 09:43:45.181555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.497 [2024-07-24 09:43:45.181589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:07.497 [2024-07-24 09:43:45.181603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.173 ms 00:18:07.497 [2024-07-24 09:43:45.181612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.497 [2024-07-24 09:43:45.182785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.497 [2024-07-24 09:43:45.182816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:07.497 [2024-07-24 09:43:45.182830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:18:07.497 [2024-07-24 09:43:45.182839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.497 [2024-07-24 09:43:45.182882] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:07.497 [2024-07-24 09:43:45.182899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.182914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.182925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.182943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.182954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.182968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.182979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.182992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183321] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:07.497 [2024-07-24 09:43:45.183600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 
09:43:45.183624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:18:07.498 [2024-07-24 09:43:45.183922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.183997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.184010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.184020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.184033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.184043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.184057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.184068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.184081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.184092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.184107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.184117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.184133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:07.498 [2024-07-24 09:43:45.184151] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:07.498 [2024-07-24 09:43:45.184163] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 06ad3162-afda-493d-8a37-9594fb773b68 00:18:07.498 [2024-07-24 09:43:45.184174] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:07.498 [2024-07-24 09:43:45.184199] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:07.498 [2024-07-24 09:43:45.184209] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:07.498 [2024-07-24 09:43:45.184222] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:07.498 [2024-07-24 09:43:45.184231] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:07.498 [2024-07-24 09:43:45.184245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:07.498 [2024-07-24 09:43:45.184254] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:07.498 [2024-07-24 09:43:45.184266] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:07.498 [2024-07-24 09:43:45.184275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:07.498 [2024-07-24 09:43:45.184287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.498 [2024-07-24 09:43:45.184297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:07.498 [2024-07-24 09:43:45.184310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.409 ms 00:18:07.498 [2024-07-24 09:43:45.184320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.498 [2024-07-24 09:43:45.186601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.498 [2024-07-24 09:43:45.186712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:07.498 [2024-07-24 09:43:45.186786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.250 ms 00:18:07.498 [2024-07-24 09:43:45.186821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.498 [2024-07-24 09:43:45.186984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.498 [2024-07-24 09:43:45.187018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:07.498 [2024-07-24 09:43:45.187098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:18:07.498 [2024-07-24 09:43:45.187132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.498 [2024-07-24 09:43:45.194182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.498 [2024-07-24 09:43:45.194329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:07.498 [2024-07-24 09:43:45.194405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.498 [2024-07-24 09:43:45.194439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.498 [2024-07-24 09:43:45.194560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.498 [2024-07-24 09:43:45.194596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:07.498 [2024-07-24 09:43:45.194747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.498 [2024-07-24 09:43:45.194820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.498 [2024-07-24 09:43:45.194917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.498 [2024-07-24 09:43:45.194954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:07.498 [2024-07-24 09:43:45.194987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.498 [2024-07-24 09:43:45.195016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.498 [2024-07-24 09:43:45.195076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.498 [2024-07-24 09:43:45.195156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:07.498 [2024-07-24 09:43:45.195252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.498 [2024-07-24 09:43:45.195285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.498 [2024-07-24 09:43:45.208510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:18:07.498 [2024-07-24 09:43:45.208698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:07.498 [2024-07-24 09:43:45.208784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.498 [2024-07-24 09:43:45.208821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.499 [2024-07-24 09:43:45.217235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.499 [2024-07-24 09:43:45.217383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:07.499 [2024-07-24 09:43:45.217459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.499 [2024-07-24 09:43:45.217494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.499 [2024-07-24 09:43:45.217589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.499 [2024-07-24 09:43:45.217627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:07.499 [2024-07-24 09:43:45.217660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.499 [2024-07-24 09:43:45.217689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.499 [2024-07-24 09:43:45.217830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.499 [2024-07-24 09:43:45.217868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:07.499 [2024-07-24 09:43:45.217920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.499 [2024-07-24 09:43:45.217949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.499 [2024-07-24 09:43:45.218076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.499 [2024-07-24 09:43:45.218130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:07.499 [2024-07-24 09:43:45.218167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.499 [2024-07-24 09:43:45.218215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.499 [2024-07-24 09:43:45.218352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.499 [2024-07-24 09:43:45.218454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:07.499 [2024-07-24 09:43:45.218526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.499 [2024-07-24 09:43:45.218559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.499 [2024-07-24 09:43:45.218643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.499 [2024-07-24 09:43:45.218737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:07.499 [2024-07-24 09:43:45.218809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.499 [2024-07-24 09:43:45.218838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.499 [2024-07-24 09:43:45.218944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.499 [2024-07-24 09:43:45.218979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:07.499 [2024-07-24 09:43:45.219065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.499 [2024-07-24 09:43:45.219099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.499 [2024-07-24 
09:43:45.219305] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.729 ms, result 0 00:18:07.499 true 00:18:07.499 09:43:45 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 90013 00:18:07.499 09:43:45 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 90013 ']' 00:18:07.499 09:43:45 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 90013 00:18:07.499 09:43:45 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:18:07.499 09:43:45 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:07.499 09:43:45 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90013 00:18:07.499 09:43:45 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:07.499 09:43:45 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:07.499 09:43:45 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90013' 00:18:07.499 killing process with pid 90013 00:18:07.499 09:43:45 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 90013 00:18:07.499 09:43:45 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 90013 00:18:10.789 09:43:48 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:18:11.725 65536+0 records in 00:18:11.725 65536+0 records out 00:18:11.725 268435456 bytes (268 MB, 256 MiB) copied, 0.971716 s, 276 MB/s 00:18:11.725 09:43:49 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:11.725 [2024-07-24 09:43:49.339448] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:18:11.725 [2024-07-24 09:43:49.339579] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90197 ] 00:18:11.725 [2024-07-24 09:43:49.503056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.984 [2024-07-24 09:43:49.544896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.984 [2024-07-24 09:43:49.645936] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:11.984 [2024-07-24 09:43:49.646010] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:12.245 [2024-07-24 09:43:49.803940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.245 [2024-07-24 09:43:49.803998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:12.245 [2024-07-24 09:43:49.804021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:12.245 [2024-07-24 09:43:49.804031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.245 [2024-07-24 09:43:49.806475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.245 [2024-07-24 09:43:49.806518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:12.245 [2024-07-24 09:43:49.806531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.427 ms 00:18:12.245 [2024-07-24 09:43:49.806541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.245 [2024-07-24 09:43:49.806621] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:12.245 [2024-07-24 09:43:49.806841] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:12.245 [2024-07-24 09:43:49.806864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.245 [2024-07-24 09:43:49.806874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:12.245 [2024-07-24 09:43:49.806885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:18:12.245 [2024-07-24 09:43:49.806895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.245 [2024-07-24 09:43:49.808344] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:12.245 [2024-07-24 09:43:49.810836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.246 [2024-07-24 09:43:49.810872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:12.246 [2024-07-24 09:43:49.810886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.498 ms 00:18:12.246 [2024-07-24 09:43:49.810896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.246 [2024-07-24 09:43:49.810964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.246 [2024-07-24 09:43:49.810977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:12.246 [2024-07-24 09:43:49.810994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:12.246 [2024-07-24 09:43:49.811004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.246 [2024-07-24 09:43:49.817608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:12.246 [2024-07-24 09:43:49.817639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:12.246 [2024-07-24 09:43:49.817651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.568 ms 00:18:12.246 [2024-07-24 09:43:49.817660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.246 [2024-07-24 09:43:49.817774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.246 [2024-07-24 09:43:49.817791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:12.246 [2024-07-24 09:43:49.817805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:18:12.246 [2024-07-24 09:43:49.817814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.246 [2024-07-24 09:43:49.817848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.246 [2024-07-24 09:43:49.817859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:12.246 [2024-07-24 09:43:49.817869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:12.246 [2024-07-24 09:43:49.817879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.246 [2024-07-24 09:43:49.817903] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:12.246 [2024-07-24 09:43:49.819614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.246 [2024-07-24 09:43:49.819645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:12.246 [2024-07-24 09:43:49.819657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.721 ms 00:18:12.246 [2024-07-24 09:43:49.819667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.246 [2024-07-24 09:43:49.819739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.246 [2024-07-24 09:43:49.819751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:12.246 [2024-07-24 09:43:49.819762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:12.246 [2024-07-24 09:43:49.819774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.246 [2024-07-24 09:43:49.819805] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:12.246 [2024-07-24 09:43:49.819829] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:12.246 [2024-07-24 09:43:49.819866] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:12.246 [2024-07-24 09:43:49.819895] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:12.246 [2024-07-24 09:43:49.819985] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:12.246 [2024-07-24 09:43:49.819998] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:12.246 [2024-07-24 09:43:49.820018] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:12.246 [2024-07-24 09:43:49.820030] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:12.246 [2024-07-24 09:43:49.820042] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:12.246 [2024-07-24 09:43:49.820053] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:12.246 [2024-07-24 09:43:49.820066] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:12.246 [2024-07-24 09:43:49.820076] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:12.246 [2024-07-24 09:43:49.820085] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:12.246 [2024-07-24 09:43:49.820095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.246 [2024-07-24 09:43:49.820105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:12.246 [2024-07-24 09:43:49.820117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:18:12.246 [2024-07-24 09:43:49.820127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.246 [2024-07-24 09:43:49.820210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.246 [2024-07-24 09:43:49.820222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:12.246 [2024-07-24 09:43:49.820232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:12.246 [2024-07-24 09:43:49.820244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.246 [2024-07-24 09:43:49.820329] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:12.246 [2024-07-24 09:43:49.820341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:12.246 [2024-07-24 09:43:49.820351] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:12.246 [2024-07-24 09:43:49.820361] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.246 [2024-07-24 09:43:49.820374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:12.246 [2024-07-24 09:43:49.820384] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:12.246 [2024-07-24 09:43:49.820396] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:12.246 [2024-07-24 09:43:49.820407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:12.246 [2024-07-24 09:43:49.820427] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:12.246 [2024-07-24 09:43:49.820440] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:12.246 [2024-07-24 09:43:49.820450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:12.246 [2024-07-24 09:43:49.820462] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:12.246 [2024-07-24 09:43:49.820472] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:12.246 [2024-07-24 09:43:49.820482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:12.246 [2024-07-24 09:43:49.820491] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:12.246 [2024-07-24 09:43:49.820500] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.246 [2024-07-24 09:43:49.820510] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:12.246 [2024-07-24 09:43:49.820519] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:12.246 [2024-07-24 09:43:49.820528] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.246 [2024-07-24 09:43:49.820537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:12.246 [2024-07-24 09:43:49.820546] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:12.246 [2024-07-24 09:43:49.820555] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.246 [2024-07-24 09:43:49.820564] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:12.246 [2024-07-24 09:43:49.820573] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:12.246 [2024-07-24 09:43:49.820582] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.246 [2024-07-24 09:43:49.820597] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:12.246 [2024-07-24 09:43:49.820606] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:12.246 [2024-07-24 09:43:49.820615] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.246 [2024-07-24 09:43:49.820624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:12.246 [2024-07-24 09:43:49.820633] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:12.246 [2024-07-24 09:43:49.820641] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.246 [2024-07-24 09:43:49.820650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:12.246 [2024-07-24 09:43:49.820659] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:12.246 [2024-07-24 09:43:49.820667] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:12.246 [2024-07-24 09:43:49.820676] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:12.246 [2024-07-24 09:43:49.820685] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:12.246 [2024-07-24 09:43:49.820694] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:12.246 [2024-07-24 09:43:49.820703] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:12.246 [2024-07-24 09:43:49.820711] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:12.246 [2024-07-24 09:43:49.820720] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.246 [2024-07-24 09:43:49.820729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:12.246 [2024-07-24 09:43:49.820740] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:12.246 [2024-07-24 09:43:49.820750] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.246 [2024-07-24 09:43:49.820760] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:12.246 [2024-07-24 09:43:49.820770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:12.246 [2024-07-24 09:43:49.820780] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:12.246 [2024-07-24 09:43:49.820789] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.246 [2024-07-24 09:43:49.820799] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:12.246 [2024-07-24 09:43:49.820808] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:12.246 [2024-07-24 09:43:49.820818] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:12.246 
[2024-07-24 09:43:49.820827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:12.246 [2024-07-24 09:43:49.820836] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:12.247 [2024-07-24 09:43:49.820849] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:12.247 [2024-07-24 09:43:49.820863] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:12.247 [2024-07-24 09:43:49.820890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:12.247 [2024-07-24 09:43:49.820905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:12.247 [2024-07-24 09:43:49.820916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:12.247 [2024-07-24 09:43:49.820935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:12.247 [2024-07-24 09:43:49.820949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:12.247 [2024-07-24 09:43:49.820960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:12.247 [2024-07-24 09:43:49.820974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:12.247 [2024-07-24 09:43:49.820984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:12.247 [2024-07-24 09:43:49.820998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:12.247 [2024-07-24 09:43:49.821011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:12.247 [2024-07-24 09:43:49.821025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:12.247 [2024-07-24 09:43:49.821039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:12.247 [2024-07-24 09:43:49.821052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:12.247 [2024-07-24 09:43:49.821065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:12.247 [2024-07-24 09:43:49.821078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:12.247 [2024-07-24 09:43:49.821091] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:12.247 [2024-07-24 09:43:49.821106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:12.247 [2024-07-24 09:43:49.821117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:12.247 [2024-07-24 09:43:49.821128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:12.247 [2024-07-24 09:43:49.821140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:12.247 [2024-07-24 09:43:49.821151] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:12.247 [2024-07-24 09:43:49.821163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.247 [2024-07-24 09:43:49.821173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:12.247 [2024-07-24 09:43:49.821429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:18:12.247 [2024-07-24 09:43:49.821472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.247 [2024-07-24 09:43:49.842573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.247 [2024-07-24 09:43:49.842747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:12.247 [2024-07-24 09:43:49.842881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.045 ms 00:18:12.247 [2024-07-24 09:43:49.842939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.247 [2024-07-24 09:43:49.843127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.247 [2024-07-24 09:43:49.843291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:12.247 [2024-07-24 09:43:49.843368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:12.247 [2024-07-24 09:43:49.843407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.247 [2024-07-24 09:43:49.854313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.247 [2024-07-24 09:43:49.854455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:12.247 [2024-07-24 09:43:49.854535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.865 ms 00:18:12.247 [2024-07-24 09:43:49.854570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.247 [2024-07-24 09:43:49.854665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.247 [2024-07-24 09:43:49.854700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:12.247 [2024-07-24 09:43:49.854735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:12.247 [2024-07-24 09:43:49.854764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.247 [2024-07-24 09:43:49.855220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.247 [2024-07-24 09:43:49.855262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:12.247 [2024-07-24 09:43:49.855292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:18:12.247 [2024-07-24 09:43:49.855325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.247 [2024-07-24 09:43:49.855531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.247 [2024-07-24 09:43:49.855570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:12.247 [2024-07-24 09:43:49.855601] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:18:12.247 [2024-07-24 09:43:49.855639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.247 [2024-07-24 09:43:49.861879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.247 [2024-07-24 09:43:49.862009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:12.247 [2024-07-24 09:43:49.862078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.166 ms 00:18:12.247 [2024-07-24 09:43:49.862119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.247 [2024-07-24 09:43:49.864731] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:12.247 [2024-07-24 09:43:49.864882] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:12.247 [2024-07-24 09:43:49.864968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.247 [2024-07-24 09:43:49.864982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:12.247 [2024-07-24 09:43:49.864993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.707 ms 00:18:12.247 [2024-07-24 09:43:49.865004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.247 [2024-07-24 09:43:49.877845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.247 [2024-07-24 09:43:49.877889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:12.247 [2024-07-24 09:43:49.877903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.791 ms 00:18:12.247 [2024-07-24 09:43:49.877913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.247 [2024-07-24 09:43:49.879733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.247 [2024-07-24 09:43:49.879765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:12.247 [2024-07-24 09:43:49.879777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.741 ms 00:18:12.247 [2024-07-24 09:43:49.879786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.247 [2024-07-24 09:43:49.881354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.247 [2024-07-24 09:43:49.881385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:12.247 [2024-07-24 09:43:49.881396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.528 ms 00:18:12.247 [2024-07-24 09:43:49.881405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.247 [2024-07-24 09:43:49.881690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.247 [2024-07-24 09:43:49.881705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:12.247 [2024-07-24 09:43:49.881717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:18:12.247 [2024-07-24 09:43:49.881727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.247 [2024-07-24 09:43:49.902315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.247 [2024-07-24 09:43:49.902379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:12.247 [2024-07-24 09:43:49.902396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
20.587 ms 00:18:12.247 [2024-07-24 09:43:49.902406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.247 [2024-07-24 09:43:49.908617] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:12.247 [2024-07-24 09:43:49.924642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.247 [2024-07-24 09:43:49.924689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:12.247 [2024-07-24 09:43:49.924704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.167 ms 00:18:12.247 [2024-07-24 09:43:49.924714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.247 [2024-07-24 09:43:49.924811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.247 [2024-07-24 09:43:49.924829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:12.247 [2024-07-24 09:43:49.924840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:12.247 [2024-07-24 09:43:49.924862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.247 [2024-07-24 09:43:49.924914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.247 [2024-07-24 09:43:49.924925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:12.247 [2024-07-24 09:43:49.924935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:12.247 [2024-07-24 09:43:49.924945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.247 [2024-07-24 09:43:49.924967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.247 [2024-07-24 09:43:49.924977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:12.248 [2024-07-24 09:43:49.924999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:12.248 [2024-07-24 09:43:49.925009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.248 [2024-07-24 09:43:49.925043] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:12.248 [2024-07-24 09:43:49.925055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.248 [2024-07-24 09:43:49.925064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:12.248 [2024-07-24 09:43:49.925075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:12.248 [2024-07-24 09:43:49.925084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.248 [2024-07-24 09:43:49.928977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.248 [2024-07-24 09:43:49.929015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:12.248 [2024-07-24 09:43:49.929034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.879 ms 00:18:12.248 [2024-07-24 09:43:49.929044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.248 [2024-07-24 09:43:49.929128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.248 [2024-07-24 09:43:49.929150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:12.248 [2024-07-24 09:43:49.929173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:12.248 [2024-07-24 09:43:49.929207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.248 
[2024-07-24 09:43:49.930231] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:12.248 [2024-07-24 09:43:49.931122] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 126.212 ms, result 0 00:18:12.248 [2024-07-24 09:43:49.931739] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:12.248 [2024-07-24 09:43:49.941634] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:22.380  Copying: 26/256 [MB] (26 MBps) Copying: 50/256 [MB] (24 MBps) Copying: 75/256 [MB] (24 MBps) Copying: 100/256 [MB] (25 MBps) Copying: 126/256 [MB] (25 MBps) Copying: 151/256 [MB] (24 MBps) Copying: 176/256 [MB] (25 MBps) Copying: 200/256 [MB] (24 MBps) Copying: 224/256 [MB] (24 MBps) Copying: 249/256 [MB] (24 MBps) Copying: 256/256 [MB] (average 24 MBps)[2024-07-24 09:44:00.183155] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:22.380 [2024-07-24 09:44:00.184525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.380 [2024-07-24 09:44:00.184554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:22.380 [2024-07-24 09:44:00.184569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:22.380 [2024-07-24 09:44:00.184579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.380 [2024-07-24 09:44:00.184609] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:22.380 [2024-07-24 09:44:00.185271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.380 [2024-07-24 09:44:00.185293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:22.380 [2024-07-24 09:44:00.185304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:18:22.380 [2024-07-24 09:44:00.185320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.380 [2024-07-24 09:44:00.186868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.380 [2024-07-24 09:44:00.186919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:22.380 [2024-07-24 09:44:00.186932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.526 ms 00:18:22.380 [2024-07-24 09:44:00.186942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.380 [2024-07-24 09:44:00.193713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.380 [2024-07-24 09:44:00.193751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:22.380 [2024-07-24 09:44:00.193763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.764 ms 00:18:22.380 [2024-07-24 09:44:00.193773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.711 [2024-07-24 09:44:00.199382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.711 [2024-07-24 09:44:00.199428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:22.711 [2024-07-24 09:44:00.199446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.583 ms 00:18:22.711 [2024-07-24 09:44:00.199455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.711 [2024-07-24 09:44:00.201018] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.711 [2024-07-24 09:44:00.201054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:22.711 [2024-07-24 09:44:00.201066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.514 ms 00:18:22.711 [2024-07-24 09:44:00.201075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.711 [2024-07-24 09:44:00.204805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.711 [2024-07-24 09:44:00.204839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:22.711 [2024-07-24 09:44:00.204852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.706 ms 00:18:22.711 [2024-07-24 09:44:00.204861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.711 [2024-07-24 09:44:00.204977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.711 [2024-07-24 09:44:00.205003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:22.711 [2024-07-24 09:44:00.205014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:18:22.711 [2024-07-24 09:44:00.205024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.711 [2024-07-24 09:44:00.207230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.711 [2024-07-24 09:44:00.207261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:22.711 [2024-07-24 09:44:00.207272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.191 ms 00:18:22.711 [2024-07-24 09:44:00.207281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.711 [2024-07-24 09:44:00.208882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.711 [2024-07-24 09:44:00.208915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:22.711 [2024-07-24 09:44:00.208926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.574 ms 00:18:22.711 [2024-07-24 09:44:00.208935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.711 [2024-07-24 09:44:00.210174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.711 [2024-07-24 09:44:00.210220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:22.711 [2024-07-24 09:44:00.210231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.212 ms 00:18:22.711 [2024-07-24 09:44:00.210241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.711 [2024-07-24 09:44:00.211511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.711 [2024-07-24 09:44:00.211543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:22.711 [2024-07-24 09:44:00.211554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms 00:18:22.711 [2024-07-24 09:44:00.211563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.712 [2024-07-24 09:44:00.211590] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:22.712 [2024-07-24 09:44:00.211616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 
[2024-07-24 09:44:00.211639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:18:22.712 [2024-07-24 09:44:00.211900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.211999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:22.712 [2024-07-24 09:44:00.212553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:22.713 [2024-07-24 09:44:00.212564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:22.713 [2024-07-24 09:44:00.212574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:22.713 [2024-07-24 09:44:00.212584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:22.713 [2024-07-24 09:44:00.212593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:22.713 [2024-07-24 09:44:00.212604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:22.713 [2024-07-24 09:44:00.212614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:22.713 [2024-07-24 09:44:00.212624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:22.713 [2024-07-24 09:44:00.212634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:22.713 [2024-07-24 09:44:00.212644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:22.713 [2024-07-24 09:44:00.212654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:22.713 [2024-07-24 09:44:00.212664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:22.713 [2024-07-24 09:44:00.212674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:22.713 [2024-07-24 09:44:00.212684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:22.713 [2024-07-24 09:44:00.212701] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:22.713 [2024-07-24 09:44:00.212711] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
06ad3162-afda-493d-8a37-9594fb773b68 00:18:22.713 [2024-07-24 09:44:00.212721] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:22.713 [2024-07-24 09:44:00.212731] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:22.713 [2024-07-24 09:44:00.212740] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:22.713 [2024-07-24 09:44:00.212751] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:22.713 [2024-07-24 09:44:00.212765] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:22.713 [2024-07-24 09:44:00.212778] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:22.713 [2024-07-24 09:44:00.212797] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:22.713 [2024-07-24 09:44:00.212806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:22.713 [2024-07-24 09:44:00.212815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:22.713 [2024-07-24 09:44:00.212824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.713 [2024-07-24 09:44:00.212834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:22.713 [2024-07-24 09:44:00.212844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.237 ms 00:18:22.713 [2024-07-24 09:44:00.212853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.713 [2024-07-24 09:44:00.214565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.713 [2024-07-24 09:44:00.214585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:22.713 [2024-07-24 09:44:00.214601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.696 ms 00:18:22.713 [2024-07-24 09:44:00.214611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.713 [2024-07-24 09:44:00.214713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.713 [2024-07-24 09:44:00.214724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:22.713 [2024-07-24 09:44:00.214734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:18:22.713 [2024-07-24 09:44:00.214744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.713 [2024-07-24 09:44:00.220945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.713 [2024-07-24 09:44:00.220972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:22.713 [2024-07-24 09:44:00.220983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.713 [2024-07-24 09:44:00.220993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.713 [2024-07-24 09:44:00.221056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.713 [2024-07-24 09:44:00.221067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:22.713 [2024-07-24 09:44:00.221078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.713 [2024-07-24 09:44:00.221087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.713 [2024-07-24 09:44:00.221128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.713 [2024-07-24 09:44:00.221140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:22.713 
[2024-07-24 09:44:00.221155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.713 [2024-07-24 09:44:00.221165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.713 [2024-07-24 09:44:00.221220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.713 [2024-07-24 09:44:00.221232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:22.713 [2024-07-24 09:44:00.221242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.713 [2024-07-24 09:44:00.221252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.713 [2024-07-24 09:44:00.232708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.713 [2024-07-24 09:44:00.232912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:22.713 [2024-07-24 09:44:00.233051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.713 [2024-07-24 09:44:00.233088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.713 [2024-07-24 09:44:00.241198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.713 [2024-07-24 09:44:00.241351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:22.713 [2024-07-24 09:44:00.241428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.713 [2024-07-24 09:44:00.241462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.713 [2024-07-24 09:44:00.241512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.713 [2024-07-24 09:44:00.241543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:22.713 [2024-07-24 09:44:00.241574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.713 [2024-07-24 09:44:00.241610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.713 [2024-07-24 09:44:00.241665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.713 [2024-07-24 09:44:00.241700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:22.713 [2024-07-24 09:44:00.241791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.713 [2024-07-24 09:44:00.241826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.713 [2024-07-24 09:44:00.241932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.713 [2024-07-24 09:44:00.241969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:22.713 [2024-07-24 09:44:00.241998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.713 [2024-07-24 09:44:00.242131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.713 [2024-07-24 09:44:00.242256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.713 [2024-07-24 09:44:00.242295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:22.713 [2024-07-24 09:44:00.242325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.713 [2024-07-24 09:44:00.242407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.713 [2024-07-24 09:44:00.242474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.713 [2024-07-24 09:44:00.242507] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:22.713 [2024-07-24 09:44:00.242537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.713 [2024-07-24 09:44:00.242566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.713 [2024-07-24 09:44:00.242705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.713 [2024-07-24 09:44:00.242738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:22.713 [2024-07-24 09:44:00.242760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.713 [2024-07-24 09:44:00.242778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.713 [2024-07-24 09:44:00.242915] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.458 ms, result 0 00:18:22.713 00:18:22.713 00:18:22.994 09:44:00 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=90312 00:18:22.994 09:44:00 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 90312 00:18:22.994 09:44:00 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 90312 ']' 00:18:22.994 09:44:00 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.994 09:44:00 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:22.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.994 09:44:00 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.994 09:44:00 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:22.994 09:44:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:22.994 09:44:00 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:22.994 [2024-07-24 09:44:00.608980] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:18:22.994 [2024-07-24 09:44:00.609110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90312 ] 00:18:22.994 [2024-07-24 09:44:00.766722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.994 [2024-07-24 09:44:00.809319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.930 09:44:01 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:23.930 09:44:01 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:18:23.930 09:44:01 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:23.930 [2024-07-24 09:44:01.579320] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:23.930 [2024-07-24 09:44:01.579388] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:24.189 [2024-07-24 09:44:01.755090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.189 [2024-07-24 09:44:01.755148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:24.189 [2024-07-24 09:44:01.755166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:24.189 [2024-07-24 09:44:01.755177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.189 [2024-07-24 09:44:01.757594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.189 [2024-07-24 09:44:01.757635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:24.189 [2024-07-24 09:44:01.757653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.381 ms 00:18:24.189 [2024-07-24 09:44:01.757663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.189 [2024-07-24 09:44:01.757747] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:24.189 [2024-07-24 09:44:01.757969] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:24.189 [2024-07-24 09:44:01.757989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.189 [2024-07-24 09:44:01.757999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:24.189 [2024-07-24 09:44:01.758015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:18:24.189 [2024-07-24 09:44:01.758025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.189 [2024-07-24 09:44:01.759526] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:24.189 [2024-07-24 09:44:01.762024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.189 [2024-07-24 09:44:01.762066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:24.189 [2024-07-24 09:44:01.762080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.509 ms 00:18:24.189 [2024-07-24 09:44:01.762093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.189 [2024-07-24 09:44:01.762158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.189 [2024-07-24 09:44:01.762181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:24.189 [2024-07-24 09:44:01.762213] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:24.189 [2024-07-24 09:44:01.762233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.189 [2024-07-24 09:44:01.768827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.189 [2024-07-24 09:44:01.768860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:24.189 [2024-07-24 09:44:01.768872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.558 ms 00:18:24.189 [2024-07-24 09:44:01.768892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.189 [2024-07-24 09:44:01.768990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.189 [2024-07-24 09:44:01.769005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:24.189 [2024-07-24 09:44:01.769019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:24.189 [2024-07-24 09:44:01.769032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.189 [2024-07-24 09:44:01.769062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.189 [2024-07-24 09:44:01.769076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:24.189 [2024-07-24 09:44:01.769086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:24.190 [2024-07-24 09:44:01.769098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.190 [2024-07-24 09:44:01.769124] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:24.190 [2024-07-24 09:44:01.770745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.190 [2024-07-24 09:44:01.770775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:24.190 [2024-07-24 09:44:01.770793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.627 ms 00:18:24.190 [2024-07-24 09:44:01.770803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.190 [2024-07-24 09:44:01.770859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.190 [2024-07-24 09:44:01.770870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:24.190 [2024-07-24 09:44:01.770890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:24.190 [2024-07-24 09:44:01.770900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.190 [2024-07-24 09:44:01.770925] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:24.190 [2024-07-24 09:44:01.770946] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:24.190 [2024-07-24 09:44:01.770993] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:24.190 [2024-07-24 09:44:01.771015] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:24.190 [2024-07-24 09:44:01.771100] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:24.190 [2024-07-24 09:44:01.771112] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:24.190 [2024-07-24 09:44:01.771128] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:24.190 [2024-07-24 09:44:01.771141] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:24.190 [2024-07-24 09:44:01.771155] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:24.190 [2024-07-24 09:44:01.771167] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:24.190 [2024-07-24 09:44:01.771185] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:24.190 [2024-07-24 09:44:01.771215] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:24.190 [2024-07-24 09:44:01.771227] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:24.190 [2024-07-24 09:44:01.771244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.190 [2024-07-24 09:44:01.771257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:24.190 [2024-07-24 09:44:01.771267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:18:24.190 [2024-07-24 09:44:01.771279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.190 [2024-07-24 09:44:01.771350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.190 [2024-07-24 09:44:01.771363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:24.190 [2024-07-24 09:44:01.771373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:24.190 [2024-07-24 09:44:01.771385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.190 [2024-07-24 09:44:01.771486] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:24.190 [2024-07-24 09:44:01.771502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:24.190 [2024-07-24 09:44:01.771512] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:24.190 [2024-07-24 09:44:01.771526] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.190 [2024-07-24 09:44:01.771536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:24.190 [2024-07-24 09:44:01.771550] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:24.190 [2024-07-24 09:44:01.771560] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:24.190 [2024-07-24 09:44:01.771572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:24.190 [2024-07-24 09:44:01.771584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:24.190 [2024-07-24 09:44:01.771596] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:24.190 [2024-07-24 09:44:01.771605] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:24.190 [2024-07-24 09:44:01.771616] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:24.190 [2024-07-24 09:44:01.771626] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:24.190 [2024-07-24 09:44:01.771638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:24.190 [2024-07-24 09:44:01.771647] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:24.190 [2024-07-24 09:44:01.771658] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.190 
[2024-07-24 09:44:01.771667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:24.190 [2024-07-24 09:44:01.771693] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:24.190 [2024-07-24 09:44:01.771702] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.190 [2024-07-24 09:44:01.771717] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:24.190 [2024-07-24 09:44:01.771726] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:24.190 [2024-07-24 09:44:01.771742] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.190 [2024-07-24 09:44:01.771751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:24.190 [2024-07-24 09:44:01.771763] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:24.190 [2024-07-24 09:44:01.771772] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.190 [2024-07-24 09:44:01.771783] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:24.190 [2024-07-24 09:44:01.771793] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:24.190 [2024-07-24 09:44:01.771804] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.190 [2024-07-24 09:44:01.771813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:24.190 [2024-07-24 09:44:01.771825] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:24.190 [2024-07-24 09:44:01.771834] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.190 [2024-07-24 09:44:01.771846] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:24.190 [2024-07-24 09:44:01.771855] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:24.190 [2024-07-24 09:44:01.771866] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:24.190 [2024-07-24 09:44:01.771876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:24.190 [2024-07-24 09:44:01.771887] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:24.190 [2024-07-24 09:44:01.771896] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:24.190 [2024-07-24 09:44:01.771910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:24.190 [2024-07-24 09:44:01.771919] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:24.190 [2024-07-24 09:44:01.771931] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.190 [2024-07-24 09:44:01.771941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:24.190 [2024-07-24 09:44:01.771953] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:24.190 [2024-07-24 09:44:01.771961] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.190 [2024-07-24 09:44:01.771972] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:24.190 [2024-07-24 09:44:01.771982] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:24.190 [2024-07-24 09:44:01.771994] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:24.190 [2024-07-24 09:44:01.772003] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.190 [2024-07-24 09:44:01.772015] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:24.190 [2024-07-24 09:44:01.772025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:24.190 [2024-07-24 09:44:01.772038] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:24.190 [2024-07-24 09:44:01.772047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:24.190 [2024-07-24 09:44:01.772058] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:24.190 [2024-07-24 09:44:01.772068] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:24.190 [2024-07-24 09:44:01.772083] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:24.190 [2024-07-24 09:44:01.772097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:24.190 [2024-07-24 09:44:01.772112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:24.190 [2024-07-24 09:44:01.772122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:24.190 [2024-07-24 09:44:01.772135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:24.190 [2024-07-24 09:44:01.772145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:24.190 [2024-07-24 09:44:01.772158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:24.190 [2024-07-24 09:44:01.772168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:24.190 [2024-07-24 09:44:01.772180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:24.190 [2024-07-24 09:44:01.772201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:24.190 [2024-07-24 09:44:01.772214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:24.190 [2024-07-24 09:44:01.772224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:24.190 [2024-07-24 09:44:01.772237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:24.190 [2024-07-24 09:44:01.772248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:24.191 [2024-07-24 09:44:01.772260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:24.191 [2024-07-24 09:44:01.772271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:24.191 [2024-07-24 09:44:01.772287] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:24.191 [2024-07-24 
09:44:01.772298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:24.191 [2024-07-24 09:44:01.772311] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:24.191 [2024-07-24 09:44:01.772323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:24.191 [2024-07-24 09:44:01.772337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:24.191 [2024-07-24 09:44:01.772347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:24.191 [2024-07-24 09:44:01.772360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.772371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:24.191 [2024-07-24 09:44:01.772383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:18:24.191 [2024-07-24 09:44:01.772392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.784315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.784353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:24.191 [2024-07-24 09:44:01.784370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.879 ms 00:18:24.191 [2024-07-24 09:44:01.784384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.784507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.784521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:24.191 [2024-07-24 09:44:01.784537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:24.191 [2024-07-24 09:44:01.784547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.795506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.795545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:24.191 [2024-07-24 09:44:01.795570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.935 ms 00:18:24.191 [2024-07-24 09:44:01.795582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.795653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.795666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:24.191 [2024-07-24 09:44:01.795679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:24.191 [2024-07-24 09:44:01.795689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.796122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.796135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:24.191 [2024-07-24 09:44:01.796148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:18:24.191 [2024-07-24 09:44:01.796161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.796299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.796313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:24.191 [2024-07-24 09:44:01.796330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:18:24.191 [2024-07-24 09:44:01.796340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.803538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.803573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:24.191 [2024-07-24 09:44:01.803588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.184 ms 00:18:24.191 [2024-07-24 09:44:01.803614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.806343] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:24.191 [2024-07-24 09:44:01.806375] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:24.191 [2024-07-24 09:44:01.806393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.806404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:24.191 [2024-07-24 09:44:01.806417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.679 ms 00:18:24.191 [2024-07-24 09:44:01.806427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.819205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.819255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:24.191 [2024-07-24 09:44:01.819273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.731 ms 00:18:24.191 [2024-07-24 09:44:01.819284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.821282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.821314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:24.191 [2024-07-24 09:44:01.821328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.908 ms 00:18:24.191 [2024-07-24 09:44:01.821338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.822801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.822832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:24.191 [2024-07-24 09:44:01.822847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.421 ms 00:18:24.191 [2024-07-24 09:44:01.822857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.823138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.823153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:24.191 [2024-07-24 09:44:01.823167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:18:24.191 [2024-07-24 09:44:01.823177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.855258] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.855319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:24.191 [2024-07-24 09:44:01.855337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.082 ms 00:18:24.191 [2024-07-24 09:44:01.855351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.861655] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:24.191 [2024-07-24 09:44:01.877974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.878035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:24.191 [2024-07-24 09:44:01.878051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.556 ms 00:18:24.191 [2024-07-24 09:44:01.878063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.878176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.878212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:24.191 [2024-07-24 09:44:01.878225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:24.191 [2024-07-24 09:44:01.878237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.878292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.878307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:24.191 [2024-07-24 09:44:01.878319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:24.191 [2024-07-24 09:44:01.878348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.878373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.878389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:24.191 [2024-07-24 09:44:01.878400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:24.191 [2024-07-24 09:44:01.878418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.878451] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:24.191 [2024-07-24 09:44:01.878465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.878475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:24.191 [2024-07-24 09:44:01.878487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:24.191 [2024-07-24 09:44:01.878509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.882152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.882199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:24.191 [2024-07-24 09:44:01.882219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.621 ms 00:18:24.191 [2024-07-24 09:44:01.882229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.882321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.191 [2024-07-24 09:44:01.882334] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:24.191 [2024-07-24 09:44:01.882348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:24.191 [2024-07-24 09:44:01.882357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.191 [2024-07-24 09:44:01.883269] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:24.191 [2024-07-24 09:44:01.884182] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 128.092 ms, result 0 00:18:24.191 [2024-07-24 09:44:01.885354] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:24.191 Some configs were skipped because the RPC state that can call them passed over. 00:18:24.191 09:44:01 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:24.450 [2024-07-24 09:44:02.107924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.450 [2024-07-24 09:44:02.108148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:24.450 [2024-07-24 09:44:02.108249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.537 ms 00:18:24.450 [2024-07-24 09:44:02.108308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.450 [2024-07-24 09:44:02.108384] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.008 ms, result 0 00:18:24.450 true 00:18:24.450 09:44:02 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:24.709 [2024-07-24 09:44:02.303649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.709 [2024-07-24 09:44:02.303712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:24.709 [2024-07-24 09:44:02.303731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.269 ms 00:18:24.709 [2024-07-24 09:44:02.303742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.709 [2024-07-24 09:44:02.303784] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.411 ms, result 0 00:18:24.709 true 00:18:24.709 09:44:02 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 90312 00:18:24.709 09:44:02 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 90312 ']' 00:18:24.709 09:44:02 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 90312 00:18:24.709 09:44:02 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:18:24.709 09:44:02 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:24.709 09:44:02 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90312 00:18:24.709 killing process with pid 90312 00:18:24.709 09:44:02 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:24.709 09:44:02 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:24.709 09:44:02 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90312' 00:18:24.709 09:44:02 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 90312 00:18:24.709 09:44:02 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 90312 00:18:24.709 [2024-07-24 09:44:02.495675] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.709 [2024-07-24 09:44:02.495738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:24.709 [2024-07-24 09:44:02.495755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:24.709 [2024-07-24 09:44:02.495774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.709 [2024-07-24 09:44:02.495800] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:24.709 [2024-07-24 09:44:02.496462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.709 [2024-07-24 09:44:02.496477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:24.709 [2024-07-24 09:44:02.496489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:18:24.709 [2024-07-24 09:44:02.496500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.709 [2024-07-24 09:44:02.496748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.709 [2024-07-24 09:44:02.496760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:24.709 [2024-07-24 09:44:02.496772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:18:24.709 [2024-07-24 09:44:02.496782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.709 [2024-07-24 09:44:02.500283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.709 [2024-07-24 09:44:02.500413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:24.709 [2024-07-24 09:44:02.500496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.470 ms 00:18:24.709 [2024-07-24 09:44:02.500531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.709 [2024-07-24 09:44:02.506352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.709 [2024-07-24 09:44:02.506492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:24.709 [2024-07-24 09:44:02.506567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.736 ms 00:18:24.709 [2024-07-24 09:44:02.506602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.709 [2024-07-24 09:44:02.508127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.709 [2024-07-24 09:44:02.508267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:24.709 [2024-07-24 09:44:02.508349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.420 ms 00:18:24.709 [2024-07-24 09:44:02.508384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.709 [2024-07-24 09:44:02.512128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.709 [2024-07-24 09:44:02.512263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:24.709 [2024-07-24 09:44:02.512346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.646 ms 00:18:24.709 [2024-07-24 09:44:02.512381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.709 [2024-07-24 09:44:02.512622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.709 [2024-07-24 09:44:02.512711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:24.709 [2024-07-24 09:44:02.512803] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:18:24.709 [2024-07-24 09:44:02.512838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.709 [2024-07-24 09:44:02.514943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.709 [2024-07-24 09:44:02.515062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:24.709 [2024-07-24 09:44:02.515133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.059 ms 00:18:24.709 [2024-07-24 09:44:02.515166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.709 [2024-07-24 09:44:02.516641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.709 [2024-07-24 09:44:02.516756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:24.709 [2024-07-24 09:44:02.516826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.346 ms 00:18:24.709 [2024-07-24 09:44:02.516859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.709 [2024-07-24 09:44:02.518038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.709 [2024-07-24 09:44:02.518156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:24.709 [2024-07-24 09:44:02.518242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.099 ms 00:18:24.709 [2024-07-24 09:44:02.518257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.709 [2024-07-24 09:44:02.519407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.710 [2024-07-24 09:44:02.519435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:24.710 [2024-07-24 09:44:02.519449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 00:18:24.710 [2024-07-24 09:44:02.519459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.710 [2024-07-24 09:44:02.519493] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:24.710 [2024-07-24 09:44:02.519509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519637] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 
09:44:02.519945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.519994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:18:24.710 [2024-07-24 09:44:02.520659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.520997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.521046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.521099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.521145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.521260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.521311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.521360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.521407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.521508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.521558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.521607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.521654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.521752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.521801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.521850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.521897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.521990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.522042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.522094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.522141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.522198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.522303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.522352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.522399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.522448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.522567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.522665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.522714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.522804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.522854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.522904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.522967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.522983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.522995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.523011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:24.710 [2024-07-24 09:44:02.523030] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:24.710 [2024-07-24 09:44:02.523046] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 06ad3162-afda-493d-8a37-9594fb773b68 00:18:24.710 [2024-07-24 09:44:02.523065] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:24.710 [2024-07-24 09:44:02.523077] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:24.710 [2024-07-24 09:44:02.523090] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:24.710 [2024-07-24 09:44:02.523103] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:24.710 [2024-07-24 09:44:02.523113] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:24.710 [2024-07-24 09:44:02.523126] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:24.710 [2024-07-24 09:44:02.523136] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:24.710 [2024-07-24 09:44:02.523147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:24.710 [2024-07-24 09:44:02.523156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:24.710 [2024-07-24 09:44:02.523170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.710 
[2024-07-24 09:44:02.523180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:24.710 [2024-07-24 09:44:02.523205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.684 ms 00:18:24.710 [2024-07-24 09:44:02.523216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.710 [2024-07-24 09:44:02.524937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.710 [2024-07-24 09:44:02.524954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:24.710 [2024-07-24 09:44:02.524967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.678 ms 00:18:24.710 [2024-07-24 09:44:02.524977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.710 [2024-07-24 09:44:02.525091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.710 [2024-07-24 09:44:02.525108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:24.710 [2024-07-24 09:44:02.525121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:24.710 [2024-07-24 09:44:02.525133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.967 [2024-07-24 09:44:02.532260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.967 [2024-07-24 09:44:02.532390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:24.967 [2024-07-24 09:44:02.532509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.967 [2024-07-24 09:44:02.532545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.967 [2024-07-24 09:44:02.532658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.967 [2024-07-24 09:44:02.532694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:24.967 [2024-07-24 09:44:02.532787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.967 [2024-07-24 09:44:02.532824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.967 [2024-07-24 09:44:02.532907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.967 [2024-07-24 09:44:02.532943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:24.967 [2024-07-24 09:44:02.532976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.967 [2024-07-24 09:44:02.533045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.968 [2024-07-24 09:44:02.533096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.968 [2024-07-24 09:44:02.533128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:24.968 [2024-07-24 09:44:02.533161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.968 [2024-07-24 09:44:02.533223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.968 [2024-07-24 09:44:02.547154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.968 [2024-07-24 09:44:02.547412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:24.968 [2024-07-24 09:44:02.547554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.968 [2024-07-24 09:44:02.547605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.968 [2024-07-24 09:44:02.556401] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.968 [2024-07-24 09:44:02.556560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:24.968 [2024-07-24 09:44:02.556643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.968 [2024-07-24 09:44:02.556681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.968 [2024-07-24 09:44:02.556768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.968 [2024-07-24 09:44:02.556802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:24.968 [2024-07-24 09:44:02.556889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.968 [2024-07-24 09:44:02.556924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.968 [2024-07-24 09:44:02.556985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.968 [2024-07-24 09:44:02.557017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:24.968 [2024-07-24 09:44:02.557049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.968 [2024-07-24 09:44:02.557122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.968 [2024-07-24 09:44:02.557268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.968 [2024-07-24 09:44:02.557313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:24.968 [2024-07-24 09:44:02.557347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.968 [2024-07-24 09:44:02.557469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.968 [2024-07-24 09:44:02.557536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.968 [2024-07-24 09:44:02.557577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:24.968 [2024-07-24 09:44:02.557668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.968 [2024-07-24 09:44:02.557744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.968 [2024-07-24 09:44:02.557845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.968 [2024-07-24 09:44:02.557920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:24.968 [2024-07-24 09:44:02.557959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.968 [2024-07-24 09:44:02.558035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.968 [2024-07-24 09:44:02.558118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.968 [2024-07-24 09:44:02.558152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:24.968 [2024-07-24 09:44:02.558241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.968 [2024-07-24 09:44:02.558278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.968 [2024-07-24 09:44:02.558446] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 62.849 ms, result 0 00:18:25.225 09:44:02 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:25.225 09:44:02 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:25.225 [2024-07-24 09:44:02.891002] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:18:25.225 [2024-07-24 09:44:02.891117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90351 ] 00:18:25.483 [2024-07-24 09:44:03.059716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.483 [2024-07-24 09:44:03.102250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.483 [2024-07-24 09:44:03.203522] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:25.483 [2024-07-24 09:44:03.203593] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:25.743 [2024-07-24 09:44:03.361397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.743 [2024-07-24 09:44:03.361454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:25.743 [2024-07-24 09:44:03.361470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:25.743 [2024-07-24 09:44:03.361480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.743 [2024-07-24 09:44:03.363918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.743 [2024-07-24 09:44:03.363957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:25.743 [2024-07-24 09:44:03.363969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.421 ms 00:18:25.743 [2024-07-24 09:44:03.363986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.743 [2024-07-24 09:44:03.364063] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:25.743 [2024-07-24 09:44:03.364307] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:25.743 [2024-07-24 09:44:03.364330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.743 [2024-07-24 09:44:03.364340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:25.743 [2024-07-24 09:44:03.364352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:18:25.743 [2024-07-24 09:44:03.364362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.743 [2024-07-24 09:44:03.365827] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:25.743 [2024-07-24 09:44:03.368239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.743 [2024-07-24 09:44:03.368279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:25.743 [2024-07-24 09:44:03.368291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.417 ms 00:18:25.744 [2024-07-24 09:44:03.368302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.744 [2024-07-24 09:44:03.368368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.744 [2024-07-24 09:44:03.368388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:25.744 [2024-07-24 09:44:03.368405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.024 ms 00:18:25.744 [2024-07-24 09:44:03.368415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.744 [2024-07-24 09:44:03.375086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.744 [2024-07-24 09:44:03.375120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:25.744 [2024-07-24 09:44:03.375132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.640 ms 00:18:25.744 [2024-07-24 09:44:03.375143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.744 [2024-07-24 09:44:03.375281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.744 [2024-07-24 09:44:03.375299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:25.744 [2024-07-24 09:44:03.375329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:18:25.744 [2024-07-24 09:44:03.375339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.744 [2024-07-24 09:44:03.375374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.744 [2024-07-24 09:44:03.375385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:25.744 [2024-07-24 09:44:03.375395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:25.744 [2024-07-24 09:44:03.375405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.744 [2024-07-24 09:44:03.375430] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:25.744 [2024-07-24 09:44:03.377051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.744 [2024-07-24 09:44:03.377078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:25.744 [2024-07-24 09:44:03.377089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.630 ms 00:18:25.744 [2024-07-24 09:44:03.377108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.744 [2024-07-24 09:44:03.377199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.744 [2024-07-24 09:44:03.377212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:25.744 [2024-07-24 09:44:03.377230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:25.744 [2024-07-24 09:44:03.377243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.744 [2024-07-24 09:44:03.377267] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:25.744 [2024-07-24 09:44:03.377297] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:25.744 [2024-07-24 09:44:03.377334] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:25.744 [2024-07-24 09:44:03.377356] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:25.744 [2024-07-24 09:44:03.377440] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:25.744 [2024-07-24 09:44:03.377453] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:25.744 [2024-07-24 09:44:03.377466] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:25.744 [2024-07-24 09:44:03.377479] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:25.744 [2024-07-24 09:44:03.377491] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:25.744 [2024-07-24 09:44:03.377502] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:25.744 [2024-07-24 09:44:03.377515] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:25.744 [2024-07-24 09:44:03.377525] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:25.744 [2024-07-24 09:44:03.377536] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:25.744 [2024-07-24 09:44:03.377546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.744 [2024-07-24 09:44:03.377556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:25.744 [2024-07-24 09:44:03.377568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:18:25.744 [2024-07-24 09:44:03.377578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.744 [2024-07-24 09:44:03.377650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.744 [2024-07-24 09:44:03.377661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:25.744 [2024-07-24 09:44:03.377671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:25.744 [2024-07-24 09:44:03.377686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.744 [2024-07-24 09:44:03.377771] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:25.744 [2024-07-24 09:44:03.377784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:25.744 [2024-07-24 09:44:03.377801] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:25.744 [2024-07-24 09:44:03.377811] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.744 [2024-07-24 09:44:03.377822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:25.744 [2024-07-24 09:44:03.377831] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:25.744 [2024-07-24 09:44:03.377840] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:25.744 [2024-07-24 09:44:03.377850] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:25.744 [2024-07-24 09:44:03.377869] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:25.744 [2024-07-24 09:44:03.377881] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:25.744 [2024-07-24 09:44:03.377891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:25.744 [2024-07-24 09:44:03.377900] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:25.744 [2024-07-24 09:44:03.377911] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:25.744 [2024-07-24 09:44:03.377921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:25.744 [2024-07-24 09:44:03.377930] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:25.744 [2024-07-24 09:44:03.377939] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.744 [2024-07-24 09:44:03.377948] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:25.744 [2024-07-24 09:44:03.377957] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:25.744 [2024-07-24 09:44:03.377967] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.744 [2024-07-24 09:44:03.377976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:25.744 [2024-07-24 09:44:03.377985] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:25.744 [2024-07-24 09:44:03.377995] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.744 [2024-07-24 09:44:03.378004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:25.744 [2024-07-24 09:44:03.378013] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:25.744 [2024-07-24 09:44:03.378022] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.744 [2024-07-24 09:44:03.378036] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:25.744 [2024-07-24 09:44:03.378045] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:25.744 [2024-07-24 09:44:03.378054] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.744 [2024-07-24 09:44:03.378068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:25.744 [2024-07-24 09:44:03.378077] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:25.744 [2024-07-24 09:44:03.378086] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.744 [2024-07-24 09:44:03.378095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:25.744 [2024-07-24 09:44:03.378104] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:25.744 [2024-07-24 09:44:03.378120] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:25.744 [2024-07-24 09:44:03.378129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:25.744 [2024-07-24 09:44:03.378141] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:25.744 [2024-07-24 09:44:03.378150] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:25.744 [2024-07-24 09:44:03.378159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:25.744 [2024-07-24 09:44:03.378176] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:25.744 [2024-07-24 09:44:03.378186] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.744 [2024-07-24 09:44:03.378204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:25.744 [2024-07-24 09:44:03.378216] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:25.744 [2024-07-24 09:44:03.378231] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.744 [2024-07-24 09:44:03.378243] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:25.744 [2024-07-24 09:44:03.378264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:25.744 [2024-07-24 09:44:03.378274] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:25.744 [2024-07-24 09:44:03.378284] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.745 [2024-07-24 09:44:03.378294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:25.745 
[2024-07-24 09:44:03.378304] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:25.745 [2024-07-24 09:44:03.378313] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:25.745 [2024-07-24 09:44:03.378322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:25.745 [2024-07-24 09:44:03.378331] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:25.745 [2024-07-24 09:44:03.378341] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:25.745 [2024-07-24 09:44:03.378351] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:25.745 [2024-07-24 09:44:03.378367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:25.745 [2024-07-24 09:44:03.378378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:25.745 [2024-07-24 09:44:03.378388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:25.745 [2024-07-24 09:44:03.378400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:25.745 [2024-07-24 09:44:03.378410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:25.745 [2024-07-24 09:44:03.378420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:25.745 [2024-07-24 09:44:03.378431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:25.745 [2024-07-24 09:44:03.378441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:25.745 [2024-07-24 09:44:03.378451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:25.745 [2024-07-24 09:44:03.378461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:25.745 [2024-07-24 09:44:03.378472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:25.745 [2024-07-24 09:44:03.378482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:25.745 [2024-07-24 09:44:03.378493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:25.745 [2024-07-24 09:44:03.378503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:25.745 [2024-07-24 09:44:03.378513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:25.745 [2024-07-24 09:44:03.378523] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:25.745 [2024-07-24 09:44:03.378535] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:25.745 [2024-07-24 09:44:03.378547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:25.745 [2024-07-24 09:44:03.378558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:25.745 [2024-07-24 09:44:03.378570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:25.745 [2024-07-24 09:44:03.378580] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:25.745 [2024-07-24 09:44:03.378591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.745 [2024-07-24 09:44:03.378603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:25.745 [2024-07-24 09:44:03.378613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 00:18:25.745 [2024-07-24 09:44:03.378623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.745 [2024-07-24 09:44:03.399599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.745 [2024-07-24 09:44:03.399826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:25.745 [2024-07-24 09:44:03.399929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.946 ms 00:18:25.745 [2024-07-24 09:44:03.399967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.745 [2024-07-24 09:44:03.400154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.745 [2024-07-24 09:44:03.400270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:25.745 [2024-07-24 09:44:03.400360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:25.745 [2024-07-24 09:44:03.400391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.745 [2024-07-24 09:44:03.411573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.745 [2024-07-24 09:44:03.411750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:25.745 [2024-07-24 09:44:03.411923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.147 ms 00:18:25.745 [2024-07-24 09:44:03.411971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.745 [2024-07-24 09:44:03.412108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.745 [2024-07-24 09:44:03.412234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:25.745 [2024-07-24 09:44:03.412271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:25.745 [2024-07-24 09:44:03.412349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.745 [2024-07-24 09:44:03.412816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.745 [2024-07-24 09:44:03.412915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:25.745 [2024-07-24 09:44:03.412990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:18:25.745 [2024-07-24 09:44:03.413029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.745 [2024-07-24 
09:44:03.413186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.745 [2024-07-24 09:44:03.413242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:25.745 [2024-07-24 09:44:03.413311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:18:25.745 [2024-07-24 09:44:03.413344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.745 [2024-07-24 09:44:03.419618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.745 [2024-07-24 09:44:03.419745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:25.745 [2024-07-24 09:44:03.419879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.222 ms 00:18:25.745 [2024-07-24 09:44:03.419929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.745 [2024-07-24 09:44:03.422620] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:25.745 [2024-07-24 09:44:03.422770] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:25.745 [2024-07-24 09:44:03.422860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.745 [2024-07-24 09:44:03.422892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:25.745 [2024-07-24 09:44:03.422922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.810 ms 00:18:25.745 [2024-07-24 09:44:03.423029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.745 [2024-07-24 09:44:03.435743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.745 [2024-07-24 09:44:03.435873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:25.745 [2024-07-24 09:44:03.436007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.525 ms 00:18:25.745 [2024-07-24 09:44:03.436029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.745 [2024-07-24 09:44:03.437612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.745 [2024-07-24 09:44:03.437645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:25.745 [2024-07-24 09:44:03.437656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.508 ms 00:18:25.745 [2024-07-24 09:44:03.437666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.745 [2024-07-24 09:44:03.439240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.745 [2024-07-24 09:44:03.439270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:25.745 [2024-07-24 09:44:03.439281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.536 ms 00:18:25.745 [2024-07-24 09:44:03.439291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.745 [2024-07-24 09:44:03.439573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.745 [2024-07-24 09:44:03.439589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:25.745 [2024-07-24 09:44:03.439600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:18:25.745 [2024-07-24 09:44:03.439610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.745 [2024-07-24 09:44:03.460921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:25.745 [2024-07-24 09:44:03.460980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:25.745 [2024-07-24 09:44:03.460995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.310 ms 00:18:25.745 [2024-07-24 09:44:03.461014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.745 [2024-07-24 09:44:03.467364] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:25.745 [2024-07-24 09:44:03.483770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.745 [2024-07-24 09:44:03.483809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:25.745 [2024-07-24 09:44:03.483832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.684 ms 00:18:25.745 [2024-07-24 09:44:03.483843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.745 [2024-07-24 09:44:03.483941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.745 [2024-07-24 09:44:03.483955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:25.745 [2024-07-24 09:44:03.483967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:25.745 [2024-07-24 09:44:03.483984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.745 [2024-07-24 09:44:03.484038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.745 [2024-07-24 09:44:03.484049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:25.745 [2024-07-24 09:44:03.484059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:25.745 [2024-07-24 09:44:03.484069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.745 [2024-07-24 09:44:03.484103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.746 [2024-07-24 09:44:03.484113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:25.746 [2024-07-24 09:44:03.484124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:25.746 [2024-07-24 09:44:03.484133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.746 [2024-07-24 09:44:03.484167] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:25.746 [2024-07-24 09:44:03.484179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.746 [2024-07-24 09:44:03.484207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:25.746 [2024-07-24 09:44:03.484218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:25.746 [2024-07-24 09:44:03.484228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.746 [2024-07-24 09:44:03.487897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.746 [2024-07-24 09:44:03.487938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:25.746 [2024-07-24 09:44:03.487951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.652 ms 00:18:25.746 [2024-07-24 09:44:03.487961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.746 [2024-07-24 09:44:03.488043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.746 [2024-07-24 09:44:03.488056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:18:25.746 [2024-07-24 09:44:03.488067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:25.746 [2024-07-24 09:44:03.488077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.746 [2024-07-24 09:44:03.489008] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:25.746 [2024-07-24 09:44:03.489935] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 127.537 ms, result 0 00:18:25.746 [2024-07-24 09:44:03.490591] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:25.746 [2024-07-24 09:44:03.500528] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:35.416  Copying: 29/256 [MB] (29 MBps) Copying: 56/256 [MB] (26 MBps) Copying: 82/256 [MB] (26 MBps) Copying: 109/256 [MB] (26 MBps) Copying: 137/256 [MB] (27 MBps) Copying: 163/256 [MB] (25 MBps) Copying: 189/256 [MB] (26 MBps) Copying: 215/256 [MB] (25 MBps) Copying: 241/256 [MB] (25 MBps) Copying: 256/256 [MB] (average 26 MBps)[2024-07-24 09:44:13.063099] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:35.416 [2024-07-24 09:44:13.064458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.416 [2024-07-24 09:44:13.064481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:35.416 [2024-07-24 09:44:13.064495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:35.416 [2024-07-24 09:44:13.064506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.416 [2024-07-24 09:44:13.064527] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:35.416 [2024-07-24 09:44:13.065179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.416 [2024-07-24 09:44:13.065210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:35.416 [2024-07-24 09:44:13.065221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:18:35.416 [2024-07-24 09:44:13.065247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.416 [2024-07-24 09:44:13.065461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.416 [2024-07-24 09:44:13.065472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:35.416 [2024-07-24 09:44:13.065482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:18:35.416 [2024-07-24 09:44:13.065492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.416 [2024-07-24 09:44:13.068343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.416 [2024-07-24 09:44:13.068368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:35.416 [2024-07-24 09:44:13.068379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.840 ms 00:18:35.416 [2024-07-24 09:44:13.068389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.416 [2024-07-24 09:44:13.074172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.416 [2024-07-24 09:44:13.074222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:35.416 [2024-07-24 09:44:13.074234] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.759 ms 00:18:35.416 [2024-07-24 09:44:13.074244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.416 [2024-07-24 09:44:13.075867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.416 [2024-07-24 09:44:13.075904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:35.416 [2024-07-24 09:44:13.075917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.588 ms 00:18:35.416 [2024-07-24 09:44:13.075926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.416 [2024-07-24 09:44:13.079451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.416 [2024-07-24 09:44:13.079487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:35.416 [2024-07-24 09:44:13.079500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.501 ms 00:18:35.416 [2024-07-24 09:44:13.079510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.416 [2024-07-24 09:44:13.079635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.416 [2024-07-24 09:44:13.079647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:35.416 [2024-07-24 09:44:13.079658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:18:35.416 [2024-07-24 09:44:13.079667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.416 [2024-07-24 09:44:13.081807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.416 [2024-07-24 09:44:13.081842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:35.416 [2024-07-24 09:44:13.081853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.126 ms 00:18:35.416 [2024-07-24 09:44:13.081862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.416 [2024-07-24 09:44:13.083393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.417 [2024-07-24 09:44:13.083426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:35.417 [2024-07-24 09:44:13.083438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.503 ms 00:18:35.417 [2024-07-24 09:44:13.083447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.417 [2024-07-24 09:44:13.084636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.417 [2024-07-24 09:44:13.084667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:35.417 [2024-07-24 09:44:13.084679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.161 ms 00:18:35.417 [2024-07-24 09:44:13.084688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.417 [2024-07-24 09:44:13.085980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.417 [2024-07-24 09:44:13.086015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:35.417 [2024-07-24 09:44:13.086026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.241 ms 00:18:35.417 [2024-07-24 09:44:13.086035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.417 [2024-07-24 09:44:13.086062] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:35.417 [2024-07-24 09:44:13.086078] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086356] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 
09:44:13.086614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:35.417 [2024-07-24 09:44:13.086860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:18:35.417 [2024-07-24 09:44:13.086870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.086880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.086891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.086901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.086912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.086922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.086932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.086943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.086952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.086962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.086972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.086982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.086992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.087002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.087012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.087023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.087033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.087042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.087053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.087063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.087073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.087083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.087094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.087104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.087115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:18:35.418 [2024-07-24 09:44:13.087131] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:35.418 [2024-07-24 09:44:13.087141] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 06ad3162-afda-493d-8a37-9594fb773b68 00:18:35.418 [2024-07-24 09:44:13.087152] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:35.418 [2024-07-24 09:44:13.087162] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:35.418 [2024-07-24 09:44:13.087183] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:35.418 [2024-07-24 09:44:13.087204] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:35.418 [2024-07-24 09:44:13.087217] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:35.418 [2024-07-24 09:44:13.087237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:35.418 [2024-07-24 09:44:13.087247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:35.418 [2024-07-24 09:44:13.087256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:35.418 [2024-07-24 09:44:13.087264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:35.418 [2024-07-24 09:44:13.087275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.418 [2024-07-24 09:44:13.087284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:35.418 [2024-07-24 09:44:13.087294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.215 ms 00:18:35.418 [2024-07-24 09:44:13.087304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.418 [2024-07-24 09:44:13.088999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.418 [2024-07-24 09:44:13.089023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:35.418 [2024-07-24 09:44:13.089034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.676 ms 00:18:35.418 [2024-07-24 09:44:13.089044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.418 [2024-07-24 09:44:13.089172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.418 [2024-07-24 09:44:13.089183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:35.418 [2024-07-24 09:44:13.089205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:18:35.418 [2024-07-24 09:44:13.089214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.418 [2024-07-24 09:44:13.095416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.418 [2024-07-24 09:44:13.095443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:35.418 [2024-07-24 09:44:13.095454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.418 [2024-07-24 09:44:13.095464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.418 [2024-07-24 09:44:13.095513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.418 [2024-07-24 09:44:13.095524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:35.418 [2024-07-24 09:44:13.095534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.418 [2024-07-24 09:44:13.095543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:35.418 [2024-07-24 09:44:13.095583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.418 [2024-07-24 09:44:13.095606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:35.418 [2024-07-24 09:44:13.095623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.418 [2024-07-24 09:44:13.095632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.418 [2024-07-24 09:44:13.095650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.418 [2024-07-24 09:44:13.095661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:35.418 [2024-07-24 09:44:13.095671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.418 [2024-07-24 09:44:13.095680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.418 [2024-07-24 09:44:13.107978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.418 [2024-07-24 09:44:13.108018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:35.418 [2024-07-24 09:44:13.108031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.418 [2024-07-24 09:44:13.108041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.418 [2024-07-24 09:44:13.116216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.418 [2024-07-24 09:44:13.116254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:35.418 [2024-07-24 09:44:13.116266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.418 [2024-07-24 09:44:13.116277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.418 [2024-07-24 09:44:13.116302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.418 [2024-07-24 09:44:13.116313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:35.418 [2024-07-24 09:44:13.116330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.418 [2024-07-24 09:44:13.116340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.418 [2024-07-24 09:44:13.116368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.418 [2024-07-24 09:44:13.116379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:35.418 [2024-07-24 09:44:13.116397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.418 [2024-07-24 09:44:13.116407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.418 [2024-07-24 09:44:13.116481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.418 [2024-07-24 09:44:13.116494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:35.418 [2024-07-24 09:44:13.116503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.418 [2024-07-24 09:44:13.116524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.418 [2024-07-24 09:44:13.116559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.418 [2024-07-24 09:44:13.116571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:35.418 [2024-07-24 09:44:13.116581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.418 [2024-07-24 
09:44:13.116591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.418 [2024-07-24 09:44:13.116627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.418 [2024-07-24 09:44:13.116638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:35.418 [2024-07-24 09:44:13.116647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.418 [2024-07-24 09:44:13.116661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.418 [2024-07-24 09:44:13.116703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.418 [2024-07-24 09:44:13.116714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:35.418 [2024-07-24 09:44:13.116723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.418 [2024-07-24 09:44:13.116732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.418 [2024-07-24 09:44:13.116868] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.464 ms, result 0 00:18:35.677 00:18:35.677 00:18:35.677 09:44:13 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:35.677 09:44:13 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:36.243 09:44:13 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:36.243 [2024-07-24 09:44:13.900895] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
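
For readability, the three ftl_trim steps that appear in the log line above can be summarized as the following shell sequence. This is only a sketch of what the test driver ran at this point (paths and flags are copied verbatim from the log), not the trim.sh script itself:

    # Compare the first 4 MiB of test/ftl/data against zeroes.
    cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero

    # Record a checksum of test/ftl/data for later verification.
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data

    # Write the random-pattern file to the ftl0 bdev through spdk_dd,
    # using the FTL JSON configuration produced earlier in the run.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
        --ob=ftl0 --count=1024 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
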
00:18:36.243 [2024-07-24 09:44:13.901030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90469 ] 00:18:36.502 [2024-07-24 09:44:14.067801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.502 [2024-07-24 09:44:14.110440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.502 [2024-07-24 09:44:14.211658] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:36.502 [2024-07-24 09:44:14.211731] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:36.762 [2024-07-24 09:44:14.370012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.762 [2024-07-24 09:44:14.370076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:36.762 [2024-07-24 09:44:14.370100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:36.762 [2024-07-24 09:44:14.370110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.762 [2024-07-24 09:44:14.372484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.762 [2024-07-24 09:44:14.372525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:36.762 [2024-07-24 09:44:14.372538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.357 ms 00:18:36.762 [2024-07-24 09:44:14.372548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.762 [2024-07-24 09:44:14.372622] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:36.762 [2024-07-24 09:44:14.372913] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:36.762 [2024-07-24 09:44:14.372946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.762 [2024-07-24 09:44:14.372957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:36.762 [2024-07-24 09:44:14.372967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:18:36.762 [2024-07-24 09:44:14.372977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.762 [2024-07-24 09:44:14.374482] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:36.762 [2024-07-24 09:44:14.376941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.762 [2024-07-24 09:44:14.376975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:36.762 [2024-07-24 09:44:14.376988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.464 ms 00:18:36.762 [2024-07-24 09:44:14.376997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.762 [2024-07-24 09:44:14.377073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.762 [2024-07-24 09:44:14.377086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:36.762 [2024-07-24 09:44:14.377111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:36.762 [2024-07-24 09:44:14.377120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.762 [2024-07-24 09:44:14.383686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:36.762 [2024-07-24 09:44:14.383714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:36.762 [2024-07-24 09:44:14.383725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.527 ms 00:18:36.762 [2024-07-24 09:44:14.383735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.762 [2024-07-24 09:44:14.383848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.762 [2024-07-24 09:44:14.383865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:36.762 [2024-07-24 09:44:14.383878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:18:36.762 [2024-07-24 09:44:14.383888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.762 [2024-07-24 09:44:14.383928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.762 [2024-07-24 09:44:14.383939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:36.762 [2024-07-24 09:44:14.383949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:36.762 [2024-07-24 09:44:14.383959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.762 [2024-07-24 09:44:14.383987] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:36.762 [2024-07-24 09:44:14.385635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.762 [2024-07-24 09:44:14.385667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:36.762 [2024-07-24 09:44:14.385679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.662 ms 00:18:36.762 [2024-07-24 09:44:14.385689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.762 [2024-07-24 09:44:14.385741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.762 [2024-07-24 09:44:14.385753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:36.762 [2024-07-24 09:44:14.385763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:36.762 [2024-07-24 09:44:14.385776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.762 [2024-07-24 09:44:14.385804] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:36.762 [2024-07-24 09:44:14.385839] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:36.762 [2024-07-24 09:44:14.385892] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:36.762 [2024-07-24 09:44:14.385928] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:36.762 [2024-07-24 09:44:14.386013] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:36.762 [2024-07-24 09:44:14.386026] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:36.762 [2024-07-24 09:44:14.386039] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:36.762 [2024-07-24 09:44:14.386052] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:36.762 [2024-07-24 09:44:14.386063] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:36.762 [2024-07-24 09:44:14.386073] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:36.762 [2024-07-24 09:44:14.386087] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:36.762 [2024-07-24 09:44:14.386108] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:36.762 [2024-07-24 09:44:14.386118] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:36.762 [2024-07-24 09:44:14.386131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.762 [2024-07-24 09:44:14.386141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:36.762 [2024-07-24 09:44:14.386154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:18:36.762 [2024-07-24 09:44:14.386168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.762 [2024-07-24 09:44:14.386270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.762 [2024-07-24 09:44:14.386283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:36.762 [2024-07-24 09:44:14.386293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:36.762 [2024-07-24 09:44:14.386305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.762 [2024-07-24 09:44:14.386392] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:36.762 [2024-07-24 09:44:14.386404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:36.762 [2024-07-24 09:44:14.386414] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:36.762 [2024-07-24 09:44:14.386424] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.762 [2024-07-24 09:44:14.386434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:36.762 [2024-07-24 09:44:14.386443] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:36.762 [2024-07-24 09:44:14.386452] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:36.762 [2024-07-24 09:44:14.386461] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:36.762 [2024-07-24 09:44:14.386479] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:36.762 [2024-07-24 09:44:14.386491] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:36.762 [2024-07-24 09:44:14.386501] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:36.762 [2024-07-24 09:44:14.386510] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:36.762 [2024-07-24 09:44:14.386519] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:36.762 [2024-07-24 09:44:14.386530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:36.762 [2024-07-24 09:44:14.386539] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:36.762 [2024-07-24 09:44:14.386548] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.762 [2024-07-24 09:44:14.386556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:36.762 [2024-07-24 09:44:14.386565] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:36.762 [2024-07-24 09:44:14.386574] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.762 [2024-07-24 09:44:14.386583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:36.762 [2024-07-24 09:44:14.386592] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:36.762 [2024-07-24 09:44:14.386601] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:36.762 [2024-07-24 09:44:14.386610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:36.762 [2024-07-24 09:44:14.386619] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:36.762 [2024-07-24 09:44:14.386628] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:36.762 [2024-07-24 09:44:14.386645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:36.762 [2024-07-24 09:44:14.386654] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:36.762 [2024-07-24 09:44:14.386663] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:36.763 [2024-07-24 09:44:14.386671] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:36.763 [2024-07-24 09:44:14.386680] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:36.763 [2024-07-24 09:44:14.386689] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:36.763 [2024-07-24 09:44:14.386697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:36.763 [2024-07-24 09:44:14.386707] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:36.763 [2024-07-24 09:44:14.386715] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:36.763 [2024-07-24 09:44:14.386741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:36.763 [2024-07-24 09:44:14.386751] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:36.763 [2024-07-24 09:44:14.386760] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:36.763 [2024-07-24 09:44:14.386769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:36.763 [2024-07-24 09:44:14.386791] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:36.763 [2024-07-24 09:44:14.386799] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.763 [2024-07-24 09:44:14.386808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:36.763 [2024-07-24 09:44:14.386825] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:36.763 [2024-07-24 09:44:14.386837] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.763 [2024-07-24 09:44:14.386846] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:36.763 [2024-07-24 09:44:14.386867] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:36.763 [2024-07-24 09:44:14.386878] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:36.763 [2024-07-24 09:44:14.386891] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.763 [2024-07-24 09:44:14.386905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:36.763 [2024-07-24 09:44:14.386914] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:36.763 [2024-07-24 09:44:14.386944] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:36.763 
[2024-07-24 09:44:14.386957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:36.763 [2024-07-24 09:44:14.386967] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:36.763 [2024-07-24 09:44:14.386977] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:36.763 [2024-07-24 09:44:14.386992] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:36.763 [2024-07-24 09:44:14.387011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:36.763 [2024-07-24 09:44:14.387027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:36.763 [2024-07-24 09:44:14.387041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:36.763 [2024-07-24 09:44:14.387068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:36.763 [2024-07-24 09:44:14.387082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:36.763 [2024-07-24 09:44:14.387096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:36.763 [2024-07-24 09:44:14.387111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:36.763 [2024-07-24 09:44:14.387121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:36.763 [2024-07-24 09:44:14.387132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:36.763 [2024-07-24 09:44:14.387143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:36.763 [2024-07-24 09:44:14.387153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:36.763 [2024-07-24 09:44:14.387164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:36.763 [2024-07-24 09:44:14.387174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:36.763 [2024-07-24 09:44:14.387185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:36.763 [2024-07-24 09:44:14.387206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:36.763 [2024-07-24 09:44:14.387217] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:36.763 [2024-07-24 09:44:14.387228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:36.763 [2024-07-24 09:44:14.387240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:36.763 [2024-07-24 09:44:14.387251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:36.763 [2024-07-24 09:44:14.387265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:36.763 [2024-07-24 09:44:14.387276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:36.763 [2024-07-24 09:44:14.387287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.763 [2024-07-24 09:44:14.387297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:36.763 [2024-07-24 09:44:14.387308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:18:36.763 [2024-07-24 09:44:14.387318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.763 [2024-07-24 09:44:14.408660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.763 [2024-07-24 09:44:14.408716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:36.763 [2024-07-24 09:44:14.408739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.308 ms 00:18:36.763 [2024-07-24 09:44:14.408752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.763 [2024-07-24 09:44:14.408915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.763 [2024-07-24 09:44:14.408932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:36.763 [2024-07-24 09:44:14.408946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:36.763 [2024-07-24 09:44:14.408959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.763 [2024-07-24 09:44:14.419747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.763 [2024-07-24 09:44:14.419788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:36.763 [2024-07-24 09:44:14.419805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.777 ms 00:18:36.763 [2024-07-24 09:44:14.419815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.763 [2024-07-24 09:44:14.419892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.763 [2024-07-24 09:44:14.419904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:36.763 [2024-07-24 09:44:14.419918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:36.763 [2024-07-24 09:44:14.419928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.763 [2024-07-24 09:44:14.420371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.763 [2024-07-24 09:44:14.420400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:36.763 [2024-07-24 09:44:14.420410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:18:36.763 [2024-07-24 09:44:14.420424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.763 [2024-07-24 09:44:14.420541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.763 [2024-07-24 09:44:14.420553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:36.763 [2024-07-24 09:44:14.420563] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:18:36.763 [2024-07-24 09:44:14.420573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.763 [2024-07-24 09:44:14.426790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.763 [2024-07-24 09:44:14.426826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:36.763 [2024-07-24 09:44:14.426839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.204 ms 00:18:36.763 [2024-07-24 09:44:14.426853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.763 [2024-07-24 09:44:14.429481] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:36.763 [2024-07-24 09:44:14.429524] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:36.763 [2024-07-24 09:44:14.429545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.763 [2024-07-24 09:44:14.429556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:36.763 [2024-07-24 09:44:14.429567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.598 ms 00:18:36.763 [2024-07-24 09:44:14.429576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.763 [2024-07-24 09:44:14.442391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.763 [2024-07-24 09:44:14.442431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:36.763 [2024-07-24 09:44:14.442446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.782 ms 00:18:36.763 [2024-07-24 09:44:14.442460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.763 [2024-07-24 09:44:14.444160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.763 [2024-07-24 09:44:14.444202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:36.763 [2024-07-24 09:44:14.444215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.616 ms 00:18:36.763 [2024-07-24 09:44:14.444224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.763 [2024-07-24 09:44:14.445712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.763 [2024-07-24 09:44:14.445749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:36.763 [2024-07-24 09:44:14.445760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.449 ms 00:18:36.764 [2024-07-24 09:44:14.445769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.764 [2024-07-24 09:44:14.446057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.764 [2024-07-24 09:44:14.446073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:36.764 [2024-07-24 09:44:14.446085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:18:36.764 [2024-07-24 09:44:14.446103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.764 [2024-07-24 09:44:14.467063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.764 [2024-07-24 09:44:14.467129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:36.764 [2024-07-24 09:44:14.467145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
20.966 ms 00:18:36.764 [2024-07-24 09:44:14.467164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.764 [2024-07-24 09:44:14.473455] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:36.764 [2024-07-24 09:44:14.489750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.764 [2024-07-24 09:44:14.489800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:36.764 [2024-07-24 09:44:14.489816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.505 ms 00:18:36.764 [2024-07-24 09:44:14.489826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.764 [2024-07-24 09:44:14.489930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.764 [2024-07-24 09:44:14.489944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:36.764 [2024-07-24 09:44:14.489955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:36.764 [2024-07-24 09:44:14.489965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.764 [2024-07-24 09:44:14.490021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.764 [2024-07-24 09:44:14.490032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:36.764 [2024-07-24 09:44:14.490043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:36.764 [2024-07-24 09:44:14.490052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.764 [2024-07-24 09:44:14.490079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.764 [2024-07-24 09:44:14.490089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:36.764 [2024-07-24 09:44:14.490099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:36.764 [2024-07-24 09:44:14.490109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.764 [2024-07-24 09:44:14.490141] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:36.764 [2024-07-24 09:44:14.490162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.764 [2024-07-24 09:44:14.490171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:36.764 [2024-07-24 09:44:14.490182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:36.764 [2024-07-24 09:44:14.490205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.764 [2024-07-24 09:44:14.493891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.764 [2024-07-24 09:44:14.493936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:36.764 [2024-07-24 09:44:14.493949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.670 ms 00:18:36.764 [2024-07-24 09:44:14.493959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.764 [2024-07-24 09:44:14.494042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.764 [2024-07-24 09:44:14.494055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:36.764 [2024-07-24 09:44:14.494066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:36.764 [2024-07-24 09:44:14.494076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.764 
[2024-07-24 09:44:14.494995] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:36.764 [2024-07-24 09:44:14.495920] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 124.922 ms, result 0 00:18:36.764 [2024-07-24 09:44:14.496615] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:36.764 [2024-07-24 09:44:14.506628] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:37.025  Copying: 4096/4096 [kB] (average 23 MBps)[2024-07-24 09:44:14.677486] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:37.025 [2024-07-24 09:44:14.678825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.025 [2024-07-24 09:44:14.678861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:37.025 [2024-07-24 09:44:14.678875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:37.025 [2024-07-24 09:44:14.678886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.025 [2024-07-24 09:44:14.678907] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:37.025 [2024-07-24 09:44:14.679564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.025 [2024-07-24 09:44:14.679587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:37.025 [2024-07-24 09:44:14.679607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:18:37.025 [2024-07-24 09:44:14.679627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.025 [2024-07-24 09:44:14.681508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.025 [2024-07-24 09:44:14.681551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:37.025 [2024-07-24 09:44:14.681564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.863 ms 00:18:37.025 [2024-07-24 09:44:14.681574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.025 [2024-07-24 09:44:14.684671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.025 [2024-07-24 09:44:14.684701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:37.025 [2024-07-24 09:44:14.684712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.073 ms 00:18:37.025 [2024-07-24 09:44:14.684722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.025 [2024-07-24 09:44:14.690423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.025 [2024-07-24 09:44:14.690460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:37.025 [2024-07-24 09:44:14.690472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.674 ms 00:18:37.025 [2024-07-24 09:44:14.690481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.025 [2024-07-24 09:44:14.691966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.025 [2024-07-24 09:44:14.692003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:37.025 [2024-07-24 09:44:14.692014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 
00:18:37.025 [2024-07-24 09:44:14.692023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.025 [2024-07-24 09:44:14.695695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.025 [2024-07-24 09:44:14.695735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:37.025 [2024-07-24 09:44:14.695747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.649 ms 00:18:37.025 [2024-07-24 09:44:14.695757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.025 [2024-07-24 09:44:14.695878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.025 [2024-07-24 09:44:14.695890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:37.025 [2024-07-24 09:44:14.695901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:18:37.025 [2024-07-24 09:44:14.695910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.025 [2024-07-24 09:44:14.697738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.025 [2024-07-24 09:44:14.697775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:37.025 [2024-07-24 09:44:14.697786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.813 ms 00:18:37.025 [2024-07-24 09:44:14.697795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.025 [2024-07-24 09:44:14.699343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.025 [2024-07-24 09:44:14.699375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:37.025 [2024-07-24 09:44:14.699386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.522 ms 00:18:37.025 [2024-07-24 09:44:14.699395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.025 [2024-07-24 09:44:14.700582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.025 [2024-07-24 09:44:14.700615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:37.025 [2024-07-24 09:44:14.700626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.160 ms 00:18:37.025 [2024-07-24 09:44:14.700635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.025 [2024-07-24 09:44:14.701831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.025 [2024-07-24 09:44:14.701868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:37.025 [2024-07-24 09:44:14.701879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.131 ms 00:18:37.025 [2024-07-24 09:44:14.701888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.025 [2024-07-24 09:44:14.701951] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:37.025 [2024-07-24 09:44:14.701978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.701990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702022] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 
09:44:14.702288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:37.025 [2024-07-24 09:44:14.702445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:18:37.026 [2024-07-24 09:44:14.702547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.702996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.703006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:37.026 [2024-07-24 09:44:14.703023] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:37.026 [2024-07-24 09:44:14.703034] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 06ad3162-afda-493d-8a37-9594fb773b68 00:18:37.026 [2024-07-24 09:44:14.703045] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:37.026 [2024-07-24 09:44:14.703054] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:37.026 [2024-07-24 
09:44:14.703068] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:37.026 [2024-07-24 09:44:14.703078] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:37.026 [2024-07-24 09:44:14.703094] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:37.026 [2024-07-24 09:44:14.703112] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:37.026 [2024-07-24 09:44:14.703122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:37.026 [2024-07-24 09:44:14.703131] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:37.026 [2024-07-24 09:44:14.703140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:37.026 [2024-07-24 09:44:14.703149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.026 [2024-07-24 09:44:14.703159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:37.026 [2024-07-24 09:44:14.703169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.200 ms 00:18:37.026 [2024-07-24 09:44:14.703178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.026 [2024-07-24 09:44:14.704892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.026 [2024-07-24 09:44:14.704914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:37.026 [2024-07-24 09:44:14.704925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.690 ms 00:18:37.026 [2024-07-24 09:44:14.704934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.026 [2024-07-24 09:44:14.705035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.026 [2024-07-24 09:44:14.705047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:37.026 [2024-07-24 09:44:14.705057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:18:37.026 [2024-07-24 09:44:14.705066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.026 [2024-07-24 09:44:14.711277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.026 [2024-07-24 09:44:14.711305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:37.026 [2024-07-24 09:44:14.711316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.026 [2024-07-24 09:44:14.711326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.026 [2024-07-24 09:44:14.711390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.026 [2024-07-24 09:44:14.711401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:37.026 [2024-07-24 09:44:14.711421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.026 [2024-07-24 09:44:14.711431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.026 [2024-07-24 09:44:14.711472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.026 [2024-07-24 09:44:14.711488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:37.026 [2024-07-24 09:44:14.711497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.026 [2024-07-24 09:44:14.711514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.026 [2024-07-24 09:44:14.711533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:18:37.026 [2024-07-24 09:44:14.711543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:37.026 [2024-07-24 09:44:14.711552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.026 [2024-07-24 09:44:14.711562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.026 [2024-07-24 09:44:14.723340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.026 [2024-07-24 09:44:14.723384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:37.026 [2024-07-24 09:44:14.723397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.026 [2024-07-24 09:44:14.723415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.027 [2024-07-24 09:44:14.731590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.027 [2024-07-24 09:44:14.731629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:37.027 [2024-07-24 09:44:14.731658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.027 [2024-07-24 09:44:14.731669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.027 [2024-07-24 09:44:14.731694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.027 [2024-07-24 09:44:14.731705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:37.027 [2024-07-24 09:44:14.731722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.027 [2024-07-24 09:44:14.731732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.027 [2024-07-24 09:44:14.731761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.027 [2024-07-24 09:44:14.731771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:37.027 [2024-07-24 09:44:14.731781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.027 [2024-07-24 09:44:14.731791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.027 [2024-07-24 09:44:14.731864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.027 [2024-07-24 09:44:14.731877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:37.027 [2024-07-24 09:44:14.731887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.027 [2024-07-24 09:44:14.731901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.027 [2024-07-24 09:44:14.731935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.027 [2024-07-24 09:44:14.731947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:37.027 [2024-07-24 09:44:14.731957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.027 [2024-07-24 09:44:14.731974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.027 [2024-07-24 09:44:14.732018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.027 [2024-07-24 09:44:14.732031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:37.027 [2024-07-24 09:44:14.732041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.027 [2024-07-24 09:44:14.732054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.027 
[2024-07-24 09:44:14.732098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.027 [2024-07-24 09:44:14.732109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:37.027 [2024-07-24 09:44:14.732119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.027 [2024-07-24 09:44:14.732129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.027 [2024-07-24 09:44:14.732281] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.515 ms, result 0 00:18:37.286 00:18:37.286 00:18:37.286 09:44:14 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=90483 00:18:37.286 09:44:14 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:37.286 09:44:14 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 90483 00:18:37.286 09:44:14 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 90483 ']' 00:18:37.286 09:44:14 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.286 09:44:14 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:37.286 09:44:14 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.286 09:44:14 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:37.286 09:44:14 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:37.286 [2024-07-24 09:44:15.071037] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:18:37.286 [2024-07-24 09:44:15.071159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90483 ] 00:18:37.545 [2024-07-24 09:44:15.235066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.545 [2024-07-24 09:44:15.279122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.141 09:44:15 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:38.141 09:44:15 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:18:38.141 09:44:15 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:38.412 [2024-07-24 09:44:16.063340] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:38.412 [2024-07-24 09:44:16.063411] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:38.672 [2024-07-24 09:44:16.237387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.672 [2024-07-24 09:44:16.237444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:38.672 [2024-07-24 09:44:16.237461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:38.672 [2024-07-24 09:44:16.237478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.672 [2024-07-24 09:44:16.239863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.672 [2024-07-24 09:44:16.239902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:38.672 [2024-07-24 09:44:16.239919] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 2.366 ms 00:18:38.672 [2024-07-24 09:44:16.239929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.672 [2024-07-24 09:44:16.240007] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:38.672 [2024-07-24 09:44:16.240293] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:38.672 [2024-07-24 09:44:16.240321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.672 [2024-07-24 09:44:16.240333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:38.672 [2024-07-24 09:44:16.240346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:18:38.672 [2024-07-24 09:44:16.240356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.672 [2024-07-24 09:44:16.241844] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:38.672 [2024-07-24 09:44:16.244367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.672 [2024-07-24 09:44:16.244405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:38.672 [2024-07-24 09:44:16.244417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.533 ms 00:18:38.672 [2024-07-24 09:44:16.244430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.672 [2024-07-24 09:44:16.244491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.672 [2024-07-24 09:44:16.244506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:38.672 [2024-07-24 09:44:16.244516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:38.672 [2024-07-24 09:44:16.244534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.672 [2024-07-24 09:44:16.251102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.672 [2024-07-24 09:44:16.251141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:38.672 [2024-07-24 09:44:16.251153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.534 ms 00:18:38.672 [2024-07-24 09:44:16.251165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.672 [2024-07-24 09:44:16.251278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.672 [2024-07-24 09:44:16.251296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:38.672 [2024-07-24 09:44:16.251312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:38.672 [2024-07-24 09:44:16.251323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.672 [2024-07-24 09:44:16.251354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.672 [2024-07-24 09:44:16.251367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:38.672 [2024-07-24 09:44:16.251377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:38.672 [2024-07-24 09:44:16.251389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.672 [2024-07-24 09:44:16.251414] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:38.672 [2024-07-24 09:44:16.252996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.672 [2024-07-24 
09:44:16.253026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:38.672 [2024-07-24 09:44:16.253042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.588 ms 00:18:38.672 [2024-07-24 09:44:16.253052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.672 [2024-07-24 09:44:16.253101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.672 [2024-07-24 09:44:16.253112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:38.672 [2024-07-24 09:44:16.253125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:38.672 [2024-07-24 09:44:16.253135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.672 [2024-07-24 09:44:16.253176] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:38.672 [2024-07-24 09:44:16.253207] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:38.672 [2024-07-24 09:44:16.253254] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:38.673 [2024-07-24 09:44:16.253276] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:38.673 [2024-07-24 09:44:16.253362] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:38.673 [2024-07-24 09:44:16.253375] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:38.673 [2024-07-24 09:44:16.253390] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:38.673 [2024-07-24 09:44:16.253403] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:38.673 [2024-07-24 09:44:16.253417] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:38.673 [2024-07-24 09:44:16.253428] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:38.673 [2024-07-24 09:44:16.253445] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:38.673 [2024-07-24 09:44:16.253454] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:38.673 [2024-07-24 09:44:16.253466] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:38.673 [2024-07-24 09:44:16.253476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.673 [2024-07-24 09:44:16.253489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:38.673 [2024-07-24 09:44:16.253500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:18:38.673 [2024-07-24 09:44:16.253511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.673 [2024-07-24 09:44:16.253583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.673 [2024-07-24 09:44:16.253596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:38.673 [2024-07-24 09:44:16.253606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:38.673 [2024-07-24 09:44:16.253617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.673 [2024-07-24 09:44:16.253711] ftl_layout.c: 
758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:38.673 [2024-07-24 09:44:16.253728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:38.673 [2024-07-24 09:44:16.253739] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:38.673 [2024-07-24 09:44:16.253759] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:38.673 [2024-07-24 09:44:16.253770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:38.673 [2024-07-24 09:44:16.253785] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:38.673 [2024-07-24 09:44:16.253794] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:38.673 [2024-07-24 09:44:16.253806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:38.673 [2024-07-24 09:44:16.253815] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:38.673 [2024-07-24 09:44:16.253826] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:38.673 [2024-07-24 09:44:16.253836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:38.673 [2024-07-24 09:44:16.253848] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:38.673 [2024-07-24 09:44:16.253858] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:38.673 [2024-07-24 09:44:16.253869] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:38.673 [2024-07-24 09:44:16.253878] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:38.673 [2024-07-24 09:44:16.253889] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:38.673 [2024-07-24 09:44:16.253898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:38.673 [2024-07-24 09:44:16.253919] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:38.673 [2024-07-24 09:44:16.253928] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:38.673 [2024-07-24 09:44:16.253940] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:38.673 [2024-07-24 09:44:16.253953] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:38.673 [2024-07-24 09:44:16.253967] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:38.673 [2024-07-24 09:44:16.253976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:38.673 [2024-07-24 09:44:16.253988] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:38.673 [2024-07-24 09:44:16.253997] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:38.673 [2024-07-24 09:44:16.254017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:38.673 [2024-07-24 09:44:16.254026] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:38.673 [2024-07-24 09:44:16.254042] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:38.673 [2024-07-24 09:44:16.254051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:38.673 [2024-07-24 09:44:16.254063] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:38.673 [2024-07-24 09:44:16.254075] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:38.673 [2024-07-24 09:44:16.254086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:38.673 [2024-07-24 
09:44:16.254095] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:38.673 [2024-07-24 09:44:16.254107] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:38.673 [2024-07-24 09:44:16.254116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:38.673 [2024-07-24 09:44:16.254127] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:38.673 [2024-07-24 09:44:16.254136] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:38.673 [2024-07-24 09:44:16.254149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:38.673 [2024-07-24 09:44:16.254158] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:38.673 [2024-07-24 09:44:16.254169] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:38.673 [2024-07-24 09:44:16.254178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:38.673 [2024-07-24 09:44:16.254199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:38.673 [2024-07-24 09:44:16.254209] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:38.673 [2024-07-24 09:44:16.254221] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:38.673 [2024-07-24 09:44:16.254230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:38.673 [2024-07-24 09:44:16.254243] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:38.673 [2024-07-24 09:44:16.254253] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:38.673 [2024-07-24 09:44:16.254265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:38.673 [2024-07-24 09:44:16.254274] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:38.673 [2024-07-24 09:44:16.254286] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:38.673 [2024-07-24 09:44:16.254295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:38.673 [2024-07-24 09:44:16.254307] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:38.673 [2024-07-24 09:44:16.254316] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:38.673 [2024-07-24 09:44:16.254332] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:38.673 [2024-07-24 09:44:16.254346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:38.673 [2024-07-24 09:44:16.254360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:38.673 [2024-07-24 09:44:16.254370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:38.673 [2024-07-24 09:44:16.254383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:38.673 [2024-07-24 09:44:16.254394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:38.673 [2024-07-24 09:44:16.254406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:38.673 
[2024-07-24 09:44:16.254416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:38.673 [2024-07-24 09:44:16.254428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:38.673 [2024-07-24 09:44:16.254438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:38.673 [2024-07-24 09:44:16.254451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:38.673 [2024-07-24 09:44:16.254461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:38.673 [2024-07-24 09:44:16.254473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:38.673 [2024-07-24 09:44:16.254483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:38.673 [2024-07-24 09:44:16.254495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:38.673 [2024-07-24 09:44:16.254505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:38.673 [2024-07-24 09:44:16.254523] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:38.673 [2024-07-24 09:44:16.254538] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:38.673 [2024-07-24 09:44:16.254566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:38.673 [2024-07-24 09:44:16.254583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:38.673 [2024-07-24 09:44:16.254599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:38.673 [2024-07-24 09:44:16.254610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:38.673 [2024-07-24 09:44:16.254629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.673 [2024-07-24 09:44:16.254639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:38.673 [2024-07-24 09:44:16.254656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:18:38.674 [2024-07-24 09:44:16.254666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.266423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.674 [2024-07-24 09:44:16.266470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:38.674 [2024-07-24 09:44:16.266488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.703 ms 00:18:38.674 [2024-07-24 09:44:16.266501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.266615] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.674 [2024-07-24 09:44:16.266628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:38.674 [2024-07-24 09:44:16.266643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:38.674 [2024-07-24 09:44:16.266653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.277455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.674 [2024-07-24 09:44:16.277492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:38.674 [2024-07-24 09:44:16.277511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.793 ms 00:18:38.674 [2024-07-24 09:44:16.277522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.277591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.674 [2024-07-24 09:44:16.277604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:38.674 [2024-07-24 09:44:16.277625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:38.674 [2024-07-24 09:44:16.277636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.278061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.674 [2024-07-24 09:44:16.278081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:38.674 [2024-07-24 09:44:16.278094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:18:38.674 [2024-07-24 09:44:16.278114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.278240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.674 [2024-07-24 09:44:16.278253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:38.674 [2024-07-24 09:44:16.278268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:18:38.674 [2024-07-24 09:44:16.278279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.285234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.674 [2024-07-24 09:44:16.285269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:38.674 [2024-07-24 09:44:16.285284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.940 ms 00:18:38.674 [2024-07-24 09:44:16.285294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.287816] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:38.674 [2024-07-24 09:44:16.287854] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:38.674 [2024-07-24 09:44:16.287872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.674 [2024-07-24 09:44:16.287883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:38.674 [2024-07-24 09:44:16.287896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.477 ms 00:18:38.674 [2024-07-24 09:44:16.287905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.300639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.674 [2024-07-24 
09:44:16.300681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:38.674 [2024-07-24 09:44:16.300698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.687 ms 00:18:38.674 [2024-07-24 09:44:16.300708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.302633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.674 [2024-07-24 09:44:16.302666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:38.674 [2024-07-24 09:44:16.302681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.837 ms 00:18:38.674 [2024-07-24 09:44:16.302691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.304089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.674 [2024-07-24 09:44:16.304121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:38.674 [2024-07-24 09:44:16.304136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.355 ms 00:18:38.674 [2024-07-24 09:44:16.304145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.304453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.674 [2024-07-24 09:44:16.304470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:38.674 [2024-07-24 09:44:16.304485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:18:38.674 [2024-07-24 09:44:16.304495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.333797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.674 [2024-07-24 09:44:16.333865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:38.674 [2024-07-24 09:44:16.333885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.318 ms 00:18:38.674 [2024-07-24 09:44:16.333898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.340156] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:38.674 [2024-07-24 09:44:16.355981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.674 [2024-07-24 09:44:16.356039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:38.674 [2024-07-24 09:44:16.356066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.004 ms 00:18:38.674 [2024-07-24 09:44:16.356080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.356185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.674 [2024-07-24 09:44:16.356215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:38.674 [2024-07-24 09:44:16.356226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:38.674 [2024-07-24 09:44:16.356239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.356293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.674 [2024-07-24 09:44:16.356306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:38.674 [2024-07-24 09:44:16.356317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:38.674 [2024-07-24 
09:44:16.356328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.356361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.674 [2024-07-24 09:44:16.356381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:38.674 [2024-07-24 09:44:16.356390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:38.674 [2024-07-24 09:44:16.356406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.356439] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:38.674 [2024-07-24 09:44:16.356453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.674 [2024-07-24 09:44:16.356463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:38.674 [2024-07-24 09:44:16.356475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:38.674 [2024-07-24 09:44:16.356485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.360111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.674 [2024-07-24 09:44:16.360148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:38.674 [2024-07-24 09:44:16.360167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.604 ms 00:18:38.674 [2024-07-24 09:44:16.360177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.360288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.674 [2024-07-24 09:44:16.360302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:38.674 [2024-07-24 09:44:16.360315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:38.674 [2024-07-24 09:44:16.360325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.674 [2024-07-24 09:44:16.361286] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:38.674 [2024-07-24 09:44:16.362258] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 123.809 ms, result 0 00:18:38.674 [2024-07-24 09:44:16.363160] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:38.674 Some configs were skipped because the RPC state that can call them passed over. 
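The 'FTL startup' management process above completes in 123.809 ms, after which ftl/trim.sh exercises the device: the trim.sh@99 and trim.sh@100 steps below issue two bdev_ftl_unmap calls through the SPDK RPC script, one at the start of the device and one at the tail of the 23592960-entry L2P range. A minimal sketch of those two calls, assuming a running SPDK app that already exposes ftl0 over its RPC socket:

  # Sketch only: the two unmap calls the trim test issues below (paths as they appear in this log).
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024          # first 1024 blocks
  "$RPC" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024   # last 1024 blocks of the L2P space

Each call shows up in the log as its own short 'FTL trim' management process (a Process trim action followed by finish_msg), with rpc.py printing "true" on success.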
00:18:38.674 09:44:16 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:38.933 [2024-07-24 09:44:16.617821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.933 [2024-07-24 09:44:16.617881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:38.933 [2024-07-24 09:44:16.617897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.446 ms 00:18:38.933 [2024-07-24 09:44:16.617910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.933 [2024-07-24 09:44:16.617945] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.586 ms, result 0 00:18:38.933 true 00:18:38.933 09:44:16 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:39.191 [2024-07-24 09:44:16.809541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.191 [2024-07-24 09:44:16.809598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:39.191 [2024-07-24 09:44:16.809615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.162 ms 00:18:39.191 [2024-07-24 09:44:16.809625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.191 [2024-07-24 09:44:16.809665] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.291 ms, result 0 00:18:39.191 true 00:18:39.191 09:44:16 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 90483 00:18:39.191 09:44:16 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 90483 ']' 00:18:39.191 09:44:16 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 90483 00:18:39.191 09:44:16 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:18:39.191 09:44:16 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:39.191 09:44:16 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90483 00:18:39.191 killing process with pid 90483 00:18:39.191 09:44:16 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:39.191 09:44:16 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:39.191 09:44:16 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90483' 00:18:39.191 09:44:16 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 90483 00:18:39.191 09:44:16 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 90483 00:18:39.191 [2024-07-24 09:44:17.000630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.191 [2024-07-24 09:44:17.000698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:39.191 [2024-07-24 09:44:17.000713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:39.191 [2024-07-24 09:44:17.000726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.191 [2024-07-24 09:44:17.000751] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:39.191 [2024-07-24 09:44:17.001421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.191 [2024-07-24 09:44:17.001442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:39.191 [2024-07-24 09:44:17.001455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.651 ms 00:18:39.191 [2024-07-24 09:44:17.001465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.191 [2024-07-24 09:44:17.001757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.191 [2024-07-24 09:44:17.001777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:39.191 [2024-07-24 09:44:17.001790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:18:39.191 [2024-07-24 09:44:17.001800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.191 [2024-07-24 09:44:17.005302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.191 [2024-07-24 09:44:17.005342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:39.191 [2024-07-24 09:44:17.005359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.472 ms 00:18:39.191 [2024-07-24 09:44:17.005369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.450 [2024-07-24 09:44:17.011126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.450 [2024-07-24 09:44:17.011164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:39.450 [2024-07-24 09:44:17.011181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.726 ms 00:18:39.450 [2024-07-24 09:44:17.011199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.450 [2024-07-24 09:44:17.012780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.450 [2024-07-24 09:44:17.012815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:39.450 [2024-07-24 09:44:17.012829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.494 ms 00:18:39.450 [2024-07-24 09:44:17.012839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.450 [2024-07-24 09:44:17.016587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.450 [2024-07-24 09:44:17.016623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:39.450 [2024-07-24 09:44:17.016640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.717 ms 00:18:39.450 [2024-07-24 09:44:17.016651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.450 [2024-07-24 09:44:17.016765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.450 [2024-07-24 09:44:17.016777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:39.450 [2024-07-24 09:44:17.016790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:18:39.450 [2024-07-24 09:44:17.016800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.450 [2024-07-24 09:44:17.019069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.450 [2024-07-24 09:44:17.019101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:39.450 [2024-07-24 09:44:17.019115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.248 ms 00:18:39.450 [2024-07-24 09:44:17.019125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.450 [2024-07-24 09:44:17.020424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.450 [2024-07-24 09:44:17.020456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:39.450 [2024-07-24 
09:44:17.020472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:18:39.450 [2024-07-24 09:44:17.020482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.450 [2024-07-24 09:44:17.021638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.450 [2024-07-24 09:44:17.021670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:39.450 [2024-07-24 09:44:17.021684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.121 ms 00:18:39.450 [2024-07-24 09:44:17.021693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.450 [2024-07-24 09:44:17.022930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.450 [2024-07-24 09:44:17.022964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:39.450 [2024-07-24 09:44:17.022978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.178 ms 00:18:39.450 [2024-07-24 09:44:17.022988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.450 [2024-07-24 09:44:17.023020] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:39.450 [2024-07-24 09:44:17.023037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:39.450 [2024-07-24 09:44:17.023052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:39.450 [2024-07-24 09:44:17.023063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:39.450 [2024-07-24 09:44:17.023080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023248] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 
09:44:17.023554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:18:39.451 [2024-07-24 09:44:17.023854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.023999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.024011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.024022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.024035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.024045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.024060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.024071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.024084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.024094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.024107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.024118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.024131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.024141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:18:39.451 [2024-07-24 09:44:17.024166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:39.452 [2024-07-24 09:44:17.024177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:39.452 [2024-07-24 09:44:17.024200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:39.452 [2024-07-24 09:44:17.024212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:39.452 [2024-07-24 09:44:17.024225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:39.452 [2024-07-24 09:44:17.024236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:39.452 [2024-07-24 09:44:17.024249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:39.452 [2024-07-24 09:44:17.024259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:39.452 [2024-07-24 09:44:17.024274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:39.452 [2024-07-24 09:44:17.024292] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:39.452 [2024-07-24 09:44:17.024307] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 06ad3162-afda-493d-8a37-9594fb773b68 00:18:39.452 [2024-07-24 09:44:17.024318] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:39.452 [2024-07-24 09:44:17.024330] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:39.452 [2024-07-24 09:44:17.024340] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:39.452 [2024-07-24 09:44:17.024352] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:39.452 [2024-07-24 09:44:17.024362] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:39.452 [2024-07-24 09:44:17.024373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:39.452 [2024-07-24 09:44:17.024383] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:39.452 [2024-07-24 09:44:17.024395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:39.452 [2024-07-24 09:44:17.024404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:39.452 [2024-07-24 09:44:17.024416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.452 [2024-07-24 09:44:17.024426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:39.452 [2024-07-24 09:44:17.024439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.400 ms 00:18:39.452 [2024-07-24 09:44:17.024448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.452 [2024-07-24 09:44:17.026213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.452 [2024-07-24 09:44:17.026244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:39.452 [2024-07-24 09:44:17.026259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.718 ms 00:18:39.452 [2024-07-24 09:44:17.026268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.452 [2024-07-24 09:44:17.026374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:39.452 [2024-07-24 09:44:17.026385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:39.452 [2024-07-24 09:44:17.026398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:18:39.452 [2024-07-24 09:44:17.026410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.452 [2024-07-24 09:44:17.033364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.452 [2024-07-24 09:44:17.033393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:39.452 [2024-07-24 09:44:17.033415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.452 [2024-07-24 09:44:17.033425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.452 [2024-07-24 09:44:17.033510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.452 [2024-07-24 09:44:17.033522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:39.452 [2024-07-24 09:44:17.033535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.452 [2024-07-24 09:44:17.033555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.452 [2024-07-24 09:44:17.033606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.452 [2024-07-24 09:44:17.033619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:39.452 [2024-07-24 09:44:17.033632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.452 [2024-07-24 09:44:17.033641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.452 [2024-07-24 09:44:17.033664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.452 [2024-07-24 09:44:17.033674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:39.452 [2024-07-24 09:44:17.033686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.452 [2024-07-24 09:44:17.033696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.452 [2024-07-24 09:44:17.046471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.452 [2024-07-24 09:44:17.046518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:39.452 [2024-07-24 09:44:17.046534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.452 [2024-07-24 09:44:17.046545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.452 [2024-07-24 09:44:17.054706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.452 [2024-07-24 09:44:17.054750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:39.452 [2024-07-24 09:44:17.054766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.452 [2024-07-24 09:44:17.054786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.452 [2024-07-24 09:44:17.054837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.452 [2024-07-24 09:44:17.054849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:39.452 [2024-07-24 09:44:17.054862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.452 [2024-07-24 09:44:17.054872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
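The clean-shutdown dump a little further up reports every band as '0 / 261120 wr_cnt: 0 state: free' and a write-amplification factor of 'inf', consistent with 960 metadata writes and zero user writes. When eyeballing long captures like this one, a quick summary can be pulled out with standard tools; ftl_trim.log below is a hypothetical capture file, not something the test writes:

  # Sketch only: summarize band states and the slowest management steps from a saved copy of this output.
  LOG=ftl_trim.log
  grep -o 'state: [a-z]*' "$LOG" | sort | uniq -c               # e.g. "100 state: free"
  grep -o 'duration: [0-9.]* ms' "$LOG" | sort -k2 -rn | head   # longest trace_step durations

The 'Rollback' entries around this point carry the names of the startup steps: the shutdown tears the device down by running each startup step's cleanup handler, and here every one of them finishes in well under a millisecond.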
00:18:39.452 [2024-07-24 09:44:17.054913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.452 [2024-07-24 09:44:17.054925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:39.452 [2024-07-24 09:44:17.054938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.452 [2024-07-24 09:44:17.054948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.452 [2024-07-24 09:44:17.055035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.452 [2024-07-24 09:44:17.055047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:39.452 [2024-07-24 09:44:17.055060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.452 [2024-07-24 09:44:17.055070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.452 [2024-07-24 09:44:17.055108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.452 [2024-07-24 09:44:17.055120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:39.452 [2024-07-24 09:44:17.055132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.452 [2024-07-24 09:44:17.055142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.452 [2024-07-24 09:44:17.055242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.452 [2024-07-24 09:44:17.055259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:39.452 [2024-07-24 09:44:17.055272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.452 [2024-07-24 09:44:17.055288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.452 [2024-07-24 09:44:17.055340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.452 [2024-07-24 09:44:17.055351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:39.452 [2024-07-24 09:44:17.055363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.452 [2024-07-24 09:44:17.055373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.452 [2024-07-24 09:44:17.055512] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.943 ms, result 0 00:18:39.710 09:44:17 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:39.710 [2024-07-24 09:44:17.394115] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
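With 'FTL shutdown' finished (54.943 ms), trim.sh@105 reads the device back through spdk_dd. spdk_dd is a standalone SPDK application, so the --json config has to recreate ftl0 inside it, which is why a fresh DPDK/EAL initialization and a second FTL startup follow in the log. The invocation pattern, reproduced from the command line above:

  # Sketch only: the read-back step from trim.sh@105, with the paths as they appear in this log.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/data" \
      --count=65536 --json="$SPDK/test/ftl/config/ftl.json"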
00:18:39.710 [2024-07-24 09:44:17.394305] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90519 ] 00:18:39.968 [2024-07-24 09:44:17.560786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.968 [2024-07-24 09:44:17.604895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.968 [2024-07-24 09:44:17.705655] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:39.968 [2024-07-24 09:44:17.705724] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:40.228 [2024-07-24 09:44:17.863505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.228 [2024-07-24 09:44:17.863551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:40.228 [2024-07-24 09:44:17.863566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:40.228 [2024-07-24 09:44:17.863584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.228 [2024-07-24 09:44:17.865967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.228 [2024-07-24 09:44:17.866011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:40.228 [2024-07-24 09:44:17.866024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.366 ms 00:18:40.228 [2024-07-24 09:44:17.866034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.228 [2024-07-24 09:44:17.866116] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:40.228 [2024-07-24 09:44:17.866453] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:40.228 [2024-07-24 09:44:17.866481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.228 [2024-07-24 09:44:17.866492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:40.228 [2024-07-24 09:44:17.866512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:18:40.228 [2024-07-24 09:44:17.866529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.228 [2024-07-24 09:44:17.868007] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:40.228 [2024-07-24 09:44:17.870450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.228 [2024-07-24 09:44:17.870484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:40.228 [2024-07-24 09:44:17.870497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.448 ms 00:18:40.228 [2024-07-24 09:44:17.870507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.228 [2024-07-24 09:44:17.870574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.228 [2024-07-24 09:44:17.870587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:40.229 [2024-07-24 09:44:17.870604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:18:40.229 [2024-07-24 09:44:17.870613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.229 [2024-07-24 09:44:17.877166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
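The two 'Currently unable to find bdev with name: nvc0n1' notices above are expected at this point: the open lands before the JSON config has finished constructing that bdev, and once it exists startup proceeds and FTL picks up nvc0n1p0 as the write-buffer cache. To see which construction calls that config replays, the bdev-subsystem methods can be listed from ftl.json; this is a sketch that assumes the standard SPDK JSON-config layout (subsystems -> config -> method), since the file's contents are not shown in this log:

  # Sketch only: list the bdev construction RPCs replayed from the test's JSON config.
  jq -r '.subsystems[] | select(.subsystem == "bdev") | .config[].method' \
      /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json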
00:18:40.229 [2024-07-24 09:44:17.877211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:40.229 [2024-07-24 09:44:17.877223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.523 ms 00:18:40.229 [2024-07-24 09:44:17.877233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.229 [2024-07-24 09:44:17.877347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.229 [2024-07-24 09:44:17.877365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:40.229 [2024-07-24 09:44:17.877379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:18:40.229 [2024-07-24 09:44:17.877389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.229 [2024-07-24 09:44:17.877425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.229 [2024-07-24 09:44:17.877436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:40.229 [2024-07-24 09:44:17.877446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:40.229 [2024-07-24 09:44:17.877455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.229 [2024-07-24 09:44:17.877477] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:40.229 [2024-07-24 09:44:17.879056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.229 [2024-07-24 09:44:17.879083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:40.229 [2024-07-24 09:44:17.879102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.587 ms 00:18:40.229 [2024-07-24 09:44:17.879112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.229 [2024-07-24 09:44:17.879159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.229 [2024-07-24 09:44:17.879177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:40.229 [2024-07-24 09:44:17.879203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:40.229 [2024-07-24 09:44:17.879223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.229 [2024-07-24 09:44:17.879248] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:40.229 [2024-07-24 09:44:17.879271] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:40.229 [2024-07-24 09:44:17.879329] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:40.229 [2024-07-24 09:44:17.879358] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:40.229 [2024-07-24 09:44:17.879442] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:40.229 [2024-07-24 09:44:17.879461] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:40.229 [2024-07-24 09:44:17.879483] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:40.229 [2024-07-24 09:44:17.879496] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:40.229 [2024-07-24 09:44:17.879507] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:40.229 [2024-07-24 09:44:17.879525] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:40.229 [2024-07-24 09:44:17.879538] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:40.229 [2024-07-24 09:44:17.879548] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:40.229 [2024-07-24 09:44:17.879558] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:40.229 [2024-07-24 09:44:17.879568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.229 [2024-07-24 09:44:17.879577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:40.229 [2024-07-24 09:44:17.879597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:18:40.229 [2024-07-24 09:44:17.879606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.229 [2024-07-24 09:44:17.879684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.229 [2024-07-24 09:44:17.879702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:40.229 [2024-07-24 09:44:17.879711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:40.229 [2024-07-24 09:44:17.879724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.229 [2024-07-24 09:44:17.879806] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:40.229 [2024-07-24 09:44:17.879818] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:40.229 [2024-07-24 09:44:17.879829] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.229 [2024-07-24 09:44:17.879839] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.229 [2024-07-24 09:44:17.879849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:40.229 [2024-07-24 09:44:17.879858] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:40.229 [2024-07-24 09:44:17.879867] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:40.229 [2024-07-24 09:44:17.879876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:40.229 [2024-07-24 09:44:17.879895] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:40.229 [2024-07-24 09:44:17.879907] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.229 [2024-07-24 09:44:17.879917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:40.229 [2024-07-24 09:44:17.879926] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:40.229 [2024-07-24 09:44:17.879935] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.229 [2024-07-24 09:44:17.879945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:40.229 [2024-07-24 09:44:17.879955] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:40.229 [2024-07-24 09:44:17.879964] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.229 [2024-07-24 09:44:17.879974] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:40.229 [2024-07-24 09:44:17.879983] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:40.229 [2024-07-24 09:44:17.879992] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.229 [2024-07-24 09:44:17.880001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:40.229 [2024-07-24 09:44:17.880010] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:40.229 [2024-07-24 09:44:17.880019] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.229 [2024-07-24 09:44:17.880028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:40.229 [2024-07-24 09:44:17.880037] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:40.229 [2024-07-24 09:44:17.880046] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.229 [2024-07-24 09:44:17.880060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:40.229 [2024-07-24 09:44:17.880070] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:40.229 [2024-07-24 09:44:17.880078] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.229 [2024-07-24 09:44:17.880088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:40.229 [2024-07-24 09:44:17.880097] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:40.229 [2024-07-24 09:44:17.880105] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.229 [2024-07-24 09:44:17.880114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:40.229 [2024-07-24 09:44:17.880123] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:40.229 [2024-07-24 09:44:17.880132] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.229 [2024-07-24 09:44:17.880141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:40.229 [2024-07-24 09:44:17.880149] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:40.229 [2024-07-24 09:44:17.880158] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.229 [2024-07-24 09:44:17.880167] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:40.229 [2024-07-24 09:44:17.880176] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:40.229 [2024-07-24 09:44:17.880185] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.229 [2024-07-24 09:44:17.880204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:40.229 [2024-07-24 09:44:17.880216] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:40.229 [2024-07-24 09:44:17.880226] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.229 [2024-07-24 09:44:17.880234] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:40.229 [2024-07-24 09:44:17.880244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:40.229 [2024-07-24 09:44:17.880255] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.229 [2024-07-24 09:44:17.880264] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.229 [2024-07-24 09:44:17.880278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:40.229 [2024-07-24 09:44:17.880288] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:40.229 [2024-07-24 09:44:17.880297] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:40.229 
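The MiB figures in the layout dump above can be cross-checked against the raw block counts in the superblock dump that follows, taking the usual 4 KiB FTL block size (an assumption here, not something this log states): the l2p region is reported as 90.00 MiB at offset 0.12 MiB, the superblock entry with blk_offs:0x20 blk_sz:0x5a00 lines up with it, and the device advertises 23592960 L2P entries of 4 bytes each, which needs exactly that much space.

  # Sketch only: arithmetic cross-check of the dumped layout numbers (assumes 4 KiB FTL blocks).
  printf '%d MiB\n' $(( 0x5a00 * 4096 / 1024 / 1024 ))    # l2p region: 23040 blocks -> 90 MiB
  printf '%d MiB\n' $(( 23592960 * 4 / 1024 / 1024 ))     # L2P table: entries x 4-byte address -> 90 MiB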
[2024-07-24 09:44:17.880306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:40.229 [2024-07-24 09:44:17.880318] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:40.229 [2024-07-24 09:44:17.880328] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:40.229 [2024-07-24 09:44:17.880341] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:40.229 [2024-07-24 09:44:17.880362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.229 [2024-07-24 09:44:17.880373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:40.229 [2024-07-24 09:44:17.880383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:40.230 [2024-07-24 09:44:17.880395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:40.230 [2024-07-24 09:44:17.880406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:40.230 [2024-07-24 09:44:17.880416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:40.230 [2024-07-24 09:44:17.880426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:40.230 [2024-07-24 09:44:17.880436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:40.230 [2024-07-24 09:44:17.880445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:40.230 [2024-07-24 09:44:17.880455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:40.230 [2024-07-24 09:44:17.880465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:40.230 [2024-07-24 09:44:17.880476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:40.230 [2024-07-24 09:44:17.880486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:40.230 [2024-07-24 09:44:17.880496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:40.230 [2024-07-24 09:44:17.880506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:40.230 [2024-07-24 09:44:17.880516] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:40.230 [2024-07-24 09:44:17.880526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.230 [2024-07-24 09:44:17.880537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:40.230 [2024-07-24 09:44:17.880547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:40.230 [2024-07-24 09:44:17.880561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:40.230 [2024-07-24 09:44:17.880572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:40.230 [2024-07-24 09:44:17.880583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.230 [2024-07-24 09:44:17.880593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:40.230 [2024-07-24 09:44:17.880603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms 00:18:40.230 [2024-07-24 09:44:17.880613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.230 [2024-07-24 09:44:17.901618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.230 [2024-07-24 09:44:17.901664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:40.230 [2024-07-24 09:44:17.901685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.983 ms 00:18:40.230 [2024-07-24 09:44:17.901699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.230 [2024-07-24 09:44:17.901843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.230 [2024-07-24 09:44:17.901871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:40.230 [2024-07-24 09:44:17.901885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:40.230 [2024-07-24 09:44:17.901899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.230 [2024-07-24 09:44:17.912917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.230 [2024-07-24 09:44:17.912952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:40.230 [2024-07-24 09:44:17.912968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.007 ms 00:18:40.230 [2024-07-24 09:44:17.912986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.230 [2024-07-24 09:44:17.913052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.230 [2024-07-24 09:44:17.913072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:40.230 [2024-07-24 09:44:17.913093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:40.230 [2024-07-24 09:44:17.913103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.230 [2024-07-24 09:44:17.913541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.230 [2024-07-24 09:44:17.913562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:40.230 [2024-07-24 09:44:17.913573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:18:40.230 [2024-07-24 09:44:17.913587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.230 [2024-07-24 09:44:17.913700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.230 [2024-07-24 09:44:17.913713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:40.230 [2024-07-24 09:44:17.913723] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:18:40.230 [2024-07-24 09:44:17.913733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.230 [2024-07-24 09:44:17.920044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.230 [2024-07-24 09:44:17.920078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:40.230 [2024-07-24 09:44:17.920091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.299 ms 00:18:40.230 [2024-07-24 09:44:17.920105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.230 [2024-07-24 09:44:17.922704] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:40.230 [2024-07-24 09:44:17.922741] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:40.230 [2024-07-24 09:44:17.922755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.230 [2024-07-24 09:44:17.922766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:40.230 [2024-07-24 09:44:17.922776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.552 ms 00:18:40.230 [2024-07-24 09:44:17.922786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.230 [2024-07-24 09:44:17.935274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.230 [2024-07-24 09:44:17.935327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:40.230 [2024-07-24 09:44:17.935357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.459 ms 00:18:40.230 [2024-07-24 09:44:17.935372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.230 [2024-07-24 09:44:17.936876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.230 [2024-07-24 09:44:17.936908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:40.230 [2024-07-24 09:44:17.936920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.431 ms 00:18:40.230 [2024-07-24 09:44:17.936929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.230 [2024-07-24 09:44:17.938326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.230 [2024-07-24 09:44:17.938356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:40.230 [2024-07-24 09:44:17.938367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.358 ms 00:18:40.230 [2024-07-24 09:44:17.938377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.230 [2024-07-24 09:44:17.938652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.230 [2024-07-24 09:44:17.938675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:40.230 [2024-07-24 09:44:17.938686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:18:40.230 [2024-07-24 09:44:17.938695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.230 [2024-07-24 09:44:17.958732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.230 [2024-07-24 09:44:17.958801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:40.230 [2024-07-24 09:44:17.958825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
20.043 ms 00:18:40.230 [2024-07-24 09:44:17.958842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.230 [2024-07-24 09:44:17.965093] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:40.230 [2024-07-24 09:44:17.981183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.230 [2024-07-24 09:44:17.981233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:40.230 [2024-07-24 09:44:17.981249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.275 ms 00:18:40.230 [2024-07-24 09:44:17.981259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.230 [2024-07-24 09:44:17.981359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.230 [2024-07-24 09:44:17.981373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:40.230 [2024-07-24 09:44:17.981385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:40.230 [2024-07-24 09:44:17.981404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.230 [2024-07-24 09:44:17.981467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.230 [2024-07-24 09:44:17.981477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:40.230 [2024-07-24 09:44:17.981488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:40.230 [2024-07-24 09:44:17.981498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.230 [2024-07-24 09:44:17.981530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.230 [2024-07-24 09:44:17.981542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:40.230 [2024-07-24 09:44:17.981552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:40.230 [2024-07-24 09:44:17.981561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.230 [2024-07-24 09:44:17.981595] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:40.230 [2024-07-24 09:44:17.981607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.230 [2024-07-24 09:44:17.981617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:40.230 [2024-07-24 09:44:17.981627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:40.230 [2024-07-24 09:44:17.981637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.230 [2024-07-24 09:44:17.985334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.230 [2024-07-24 09:44:17.985376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:40.230 [2024-07-24 09:44:17.985388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.683 ms 00:18:40.230 [2024-07-24 09:44:17.985398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.230 [2024-07-24 09:44:17.985482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.231 [2024-07-24 09:44:17.985494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:40.231 [2024-07-24 09:44:17.985505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:40.231 [2024-07-24 09:44:17.985515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.231 
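The trace_step notices above follow a fixed pattern per management step: an Action (or Rollback) header, a name, a duration in milliseconds, and a status. A minimal sketch for summarizing step durations from a captured copy of this console output, assuming the notices were saved one entry per line to a file (ftl.log is a hypothetical name, not something the test produces):

  # pair each "name: ..." entry with the "duration: ... ms" entry that follows it
  grep -oE 'name: [^*]+|duration: [0-9.]+ ms' ftl.log | paste - -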
[2024-07-24 09:44:17.986476] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:40.231 [2024-07-24 09:44:17.987400] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 122.883 ms, result 0 00:18:40.231 [2024-07-24 09:44:17.988101] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:40.231 [2024-07-24 09:44:17.997888] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:50.741  Copying: 28/256 [MB] (28 MBps) Copying: 54/256 [MB] (25 MBps) Copying: 79/256 [MB] (25 MBps) Copying: 105/256 [MB] (25 MBps) Copying: 130/256 [MB] (25 MBps) Copying: 156/256 [MB] (25 MBps) Copying: 181/256 [MB] (25 MBps) Copying: 206/256 [MB] (24 MBps) Copying: 232/256 [MB] (25 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-07-24 09:44:28.278537] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:50.741 [2024-07-24 09:44:28.280121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.741 [2024-07-24 09:44:28.280160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:50.741 [2024-07-24 09:44:28.280180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:50.741 [2024-07-24 09:44:28.280215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.741 [2024-07-24 09:44:28.280257] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:50.741 [2024-07-24 09:44:28.280933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.741 [2024-07-24 09:44:28.280963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:50.741 [2024-07-24 09:44:28.280978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:18:50.741 [2024-07-24 09:44:28.281005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.741 [2024-07-24 09:44:28.281326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.741 [2024-07-24 09:44:28.281359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:50.741 [2024-07-24 09:44:28.281373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:18:50.741 [2024-07-24 09:44:28.281386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.741 [2024-07-24 09:44:28.285615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.741 [2024-07-24 09:44:28.285651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:50.741 [2024-07-24 09:44:28.285666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.213 ms 00:18:50.741 [2024-07-24 09:44:28.285679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.741 [2024-07-24 09:44:28.292973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.741 [2024-07-24 09:44:28.293017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:50.741 [2024-07-24 09:44:28.293029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.277 ms 00:18:50.741 [2024-07-24 09:44:28.293039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.741 [2024-07-24 09:44:28.294775] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.741 [2024-07-24 09:44:28.294811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:50.741 [2024-07-24 09:44:28.294822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.668 ms 00:18:50.741 [2024-07-24 09:44:28.294832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.741 [2024-07-24 09:44:28.299002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.741 [2024-07-24 09:44:28.299054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:50.741 [2024-07-24 09:44:28.299066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.139 ms 00:18:50.741 [2024-07-24 09:44:28.299084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.741 [2024-07-24 09:44:28.299226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.742 [2024-07-24 09:44:28.299239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:50.742 [2024-07-24 09:44:28.299250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:18:50.742 [2024-07-24 09:44:28.299260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.742 [2024-07-24 09:44:28.301748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.742 [2024-07-24 09:44:28.301986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:50.742 [2024-07-24 09:44:28.302000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.474 ms 00:18:50.742 [2024-07-24 09:44:28.302010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.742 [2024-07-24 09:44:28.303386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.742 [2024-07-24 09:44:28.303420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:50.742 [2024-07-24 09:44:28.303431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.346 ms 00:18:50.742 [2024-07-24 09:44:28.303440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.742 [2024-07-24 09:44:28.304672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.742 [2024-07-24 09:44:28.304705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:50.742 [2024-07-24 09:44:28.304717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.203 ms 00:18:50.742 [2024-07-24 09:44:28.304726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.742 [2024-07-24 09:44:28.305766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.742 [2024-07-24 09:44:28.305802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:50.742 [2024-07-24 09:44:28.305813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.986 ms 00:18:50.742 [2024-07-24 09:44:28.305823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.742 [2024-07-24 09:44:28.305850] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:50.742 [2024-07-24 09:44:28.305867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.305879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 
09:44:28.305890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.305901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.305911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.305921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.305932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.305942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.305953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.305963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.305973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.305984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.305995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:18:50.742 [2024-07-24 09:44:28.306150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:50.742 [2024-07-24 09:44:28.306610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:50.743 [2024-07-24 09:44:28.306925] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:50.743 [2024-07-24 09:44:28.306935] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
06ad3162-afda-493d-8a37-9594fb773b68 00:18:50.743 [2024-07-24 09:44:28.306957] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:50.743 [2024-07-24 09:44:28.306966] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:50.743 [2024-07-24 09:44:28.306980] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:50.743 [2024-07-24 09:44:28.306989] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:50.743 [2024-07-24 09:44:28.307003] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:50.743 [2024-07-24 09:44:28.307024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:50.743 [2024-07-24 09:44:28.307033] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:50.743 [2024-07-24 09:44:28.307043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:50.743 [2024-07-24 09:44:28.307052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:50.743 [2024-07-24 09:44:28.307062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.743 [2024-07-24 09:44:28.307072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:50.743 [2024-07-24 09:44:28.307082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.215 ms 00:18:50.743 [2024-07-24 09:44:28.307091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.743 [2024-07-24 09:44:28.308996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.743 [2024-07-24 09:44:28.309024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:50.743 [2024-07-24 09:44:28.309043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.888 ms 00:18:50.743 [2024-07-24 09:44:28.309053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.743 [2024-07-24 09:44:28.309167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.743 [2024-07-24 09:44:28.309185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:50.743 [2024-07-24 09:44:28.309205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:18:50.743 [2024-07-24 09:44:28.309216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.743 [2024-07-24 09:44:28.315608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.743 [2024-07-24 09:44:28.315633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:50.743 [2024-07-24 09:44:28.315645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.743 [2024-07-24 09:44:28.315656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.743 [2024-07-24 09:44:28.315723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.743 [2024-07-24 09:44:28.315735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:50.743 [2024-07-24 09:44:28.315745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.743 [2024-07-24 09:44:28.315755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.743 [2024-07-24 09:44:28.315797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.743 [2024-07-24 09:44:28.315814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:50.743 
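In the statistics dump above, WAF is presumably the write amplification factor, i.e. the ratio of total media writes to user writes; with user writes still at zero the ratio is undefined and is printed as inf:

  WAF = total writes / user writes = 960 / 0  -> reported as "inf"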
[2024-07-24 09:44:28.315824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.743 [2024-07-24 09:44:28.315833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.743 [2024-07-24 09:44:28.315851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.743 [2024-07-24 09:44:28.315862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:50.743 [2024-07-24 09:44:28.315871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.743 [2024-07-24 09:44:28.315881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.743 [2024-07-24 09:44:28.328795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.743 [2024-07-24 09:44:28.328849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:50.743 [2024-07-24 09:44:28.328862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.743 [2024-07-24 09:44:28.328872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.743 [2024-07-24 09:44:28.337343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.743 [2024-07-24 09:44:28.337393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:50.743 [2024-07-24 09:44:28.337407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.743 [2024-07-24 09:44:28.337418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.743 [2024-07-24 09:44:28.337452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.743 [2024-07-24 09:44:28.337463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:50.743 [2024-07-24 09:44:28.337481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.743 [2024-07-24 09:44:28.337491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.743 [2024-07-24 09:44:28.337521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.743 [2024-07-24 09:44:28.337532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:50.743 [2024-07-24 09:44:28.337541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.743 [2024-07-24 09:44:28.337552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.743 [2024-07-24 09:44:28.337625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.743 [2024-07-24 09:44:28.337637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:50.743 [2024-07-24 09:44:28.337647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.743 [2024-07-24 09:44:28.337670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.743 [2024-07-24 09:44:28.337704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.743 [2024-07-24 09:44:28.337716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:50.743 [2024-07-24 09:44:28.337727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.743 [2024-07-24 09:44:28.337736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.743 [2024-07-24 09:44:28.337776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.743 [2024-07-24 09:44:28.337787] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:50.743 [2024-07-24 09:44:28.337797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.743 [2024-07-24 09:44:28.337811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.743 [2024-07-24 09:44:28.337855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.743 [2024-07-24 09:44:28.337867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:50.743 [2024-07-24 09:44:28.337877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.743 [2024-07-24 09:44:28.337886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.743 [2024-07-24 09:44:28.338032] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.973 ms, result 0 00:18:50.743 00:18:50.743 00:18:51.002 09:44:28 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:51.262 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:51.262 09:44:29 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:51.262 09:44:29 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:51.262 09:44:29 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:51.262 09:44:29 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:51.262 09:44:29 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:51.520 09:44:29 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:51.520 09:44:29 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 90483 00:18:51.520 09:44:29 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 90483 ']' 00:18:51.520 09:44:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 90483 00:18:51.520 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (90483) - No such process 00:18:51.520 Process with pid 90483 is not found 00:18:51.520 09:44:29 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 90483 is not found' 00:18:51.520 00:18:51.520 real 0m52.554s 00:18:51.520 user 1m13.078s 00:18:51.520 sys 0m6.133s 00:18:51.520 09:44:29 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:51.520 ************************************ 00:18:51.520 END TEST ftl_trim 00:18:51.520 ************************************ 00:18:51.520 09:44:29 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:51.520 09:44:29 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:51.520 09:44:29 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:51.520 09:44:29 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:51.520 09:44:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:51.520 ************************************ 00:18:51.520 START TEST ftl_restore 00:18:51.520 ************************************ 00:18:51.520 09:44:29 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:51.520 * Looking for test storage... 
00:18:51.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:51.520 09:44:29 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.mIfySkQFgj 00:18:51.780 09:44:29 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=90703 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:51.780 09:44:29 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 90703 00:18:51.780 09:44:29 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 90703 ']' 00:18:51.780 09:44:29 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.780 09:44:29 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:51.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.780 09:44:29 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.780 09:44:29 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:51.780 09:44:29 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:51.780 [2024-07-24 09:44:29.473377] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:18:51.780 [2024-07-24 09:44:29.473495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90703 ] 00:18:52.039 [2024-07-24 09:44:29.641701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.039 [2024-07-24 09:44:29.682449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.606 09:44:30 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:52.606 09:44:30 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:18:52.606 09:44:30 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:52.606 09:44:30 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:52.606 09:44:30 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:52.606 09:44:30 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:52.606 09:44:30 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:52.606 09:44:30 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:52.865 09:44:30 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:52.865 09:44:30 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:52.865 09:44:30 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:52.865 09:44:30 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:52.865 09:44:30 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:52.865 09:44:30 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:18:52.865 09:44:30 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:52.865 09:44:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:53.124 09:44:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:53.124 { 00:18:53.124 "name": "nvme0n1", 00:18:53.124 "aliases": [ 00:18:53.124 "75ebc504-5e8c-400a-bfa6-f4471ea8f897" 00:18:53.124 ], 00:18:53.124 "product_name": "NVMe disk", 00:18:53.124 "block_size": 4096, 00:18:53.124 "num_blocks": 1310720, 00:18:53.124 "uuid": "75ebc504-5e8c-400a-bfa6-f4471ea8f897", 00:18:53.124 "assigned_rate_limits": { 00:18:53.124 "rw_ios_per_sec": 0, 00:18:53.124 "rw_mbytes_per_sec": 0, 00:18:53.124 "r_mbytes_per_sec": 0, 00:18:53.124 "w_mbytes_per_sec": 0 00:18:53.124 }, 00:18:53.124 "claimed": true, 00:18:53.124 "claim_type": "read_many_write_one", 00:18:53.124 "zoned": false, 00:18:53.124 "supported_io_types": { 00:18:53.124 "read": true, 00:18:53.124 "write": true, 00:18:53.124 "unmap": true, 00:18:53.124 "flush": true, 00:18:53.124 "reset": true, 00:18:53.124 "nvme_admin": true, 00:18:53.124 "nvme_io": true, 00:18:53.124 "nvme_io_md": false, 00:18:53.124 "write_zeroes": true, 00:18:53.124 "zcopy": false, 00:18:53.124 "get_zone_info": false, 00:18:53.124 "zone_management": false, 00:18:53.124 "zone_append": false, 00:18:53.124 "compare": true, 00:18:53.124 "compare_and_write": false, 00:18:53.124 "abort": true, 00:18:53.124 "seek_hole": false, 00:18:53.124 "seek_data": false, 00:18:53.124 "copy": true, 00:18:53.124 "nvme_iov_md": false 00:18:53.124 }, 00:18:53.124 "driver_specific": { 00:18:53.124 "nvme": [ 00:18:53.124 { 00:18:53.124 "pci_address": "0000:00:11.0", 00:18:53.124 "trid": { 00:18:53.124 "trtype": "PCIe", 00:18:53.124 "traddr": "0000:00:11.0" 00:18:53.124 }, 00:18:53.124 "ctrlr_data": { 00:18:53.124 "cntlid": 0, 00:18:53.124 "vendor_id": "0x1b36", 00:18:53.124 "model_number": "QEMU NVMe Ctrl", 00:18:53.124 "serial_number": "12341", 00:18:53.124 "firmware_revision": "8.0.0", 00:18:53.124 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:53.124 "oacs": { 00:18:53.124 "security": 0, 00:18:53.124 "format": 1, 00:18:53.124 "firmware": 0, 00:18:53.124 "ns_manage": 1 00:18:53.124 }, 00:18:53.124 "multi_ctrlr": false, 00:18:53.124 "ana_reporting": false 00:18:53.124 }, 00:18:53.124 "vs": { 00:18:53.124 "nvme_version": "1.4" 00:18:53.124 }, 00:18:53.124 "ns_data": { 00:18:53.124 "id": 1, 00:18:53.124 "can_share": false 00:18:53.124 } 00:18:53.124 } 00:18:53.124 ], 00:18:53.124 "mp_policy": "active_passive" 00:18:53.124 } 00:18:53.124 } 00:18:53.124 ]' 00:18:53.124 09:44:30 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:53.124 09:44:30 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:53.124 09:44:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:53.124 09:44:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:53.124 09:44:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:53.124 09:44:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:18:53.124 09:44:30 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:53.124 09:44:30 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:53.124 09:44:30 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:53.124 09:44:30 ftl.ftl_restore -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:53.124 09:44:30 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:53.383 09:44:30 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=1101f753-a32a-4878-8eb1-76677ef6fae4 00:18:53.383 09:44:30 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:53.383 09:44:30 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1101f753-a32a-4878-8eb1-76677ef6fae4 00:18:53.383 09:44:31 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:53.642 09:44:31 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=08b6f515-839d-46ef-bb5d-d111435ee391 00:18:53.642 09:44:31 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 08b6f515-839d-46ef-bb5d-d111435ee391 00:18:53.899 09:44:31 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=24c79d73-299b-4a0a-9840-cf4a83433857 00:18:53.899 09:44:31 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:53.899 09:44:31 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 24c79d73-299b-4a0a-9840-cf4a83433857 00:18:53.899 09:44:31 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:53.899 09:44:31 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:53.899 09:44:31 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=24c79d73-299b-4a0a-9840-cf4a83433857 00:18:53.899 09:44:31 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:53.899 09:44:31 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 24c79d73-299b-4a0a-9840-cf4a83433857 00:18:53.899 09:44:31 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=24c79d73-299b-4a0a-9840-cf4a83433857 00:18:53.899 09:44:31 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:53.899 09:44:31 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:53.899 09:44:31 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:53.899 09:44:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 24c79d73-299b-4a0a-9840-cf4a83433857 00:18:54.157 09:44:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:54.157 { 00:18:54.157 "name": "24c79d73-299b-4a0a-9840-cf4a83433857", 00:18:54.157 "aliases": [ 00:18:54.157 "lvs/nvme0n1p0" 00:18:54.157 ], 00:18:54.157 "product_name": "Logical Volume", 00:18:54.157 "block_size": 4096, 00:18:54.157 "num_blocks": 26476544, 00:18:54.157 "uuid": "24c79d73-299b-4a0a-9840-cf4a83433857", 00:18:54.157 "assigned_rate_limits": { 00:18:54.157 "rw_ios_per_sec": 0, 00:18:54.157 "rw_mbytes_per_sec": 0, 00:18:54.157 "r_mbytes_per_sec": 0, 00:18:54.157 "w_mbytes_per_sec": 0 00:18:54.157 }, 00:18:54.157 "claimed": false, 00:18:54.157 "zoned": false, 00:18:54.157 "supported_io_types": { 00:18:54.157 "read": true, 00:18:54.157 "write": true, 00:18:54.157 "unmap": true, 00:18:54.157 "flush": false, 00:18:54.157 "reset": true, 00:18:54.157 "nvme_admin": false, 00:18:54.157 "nvme_io": false, 00:18:54.157 "nvme_io_md": false, 00:18:54.158 "write_zeroes": true, 00:18:54.158 "zcopy": false, 00:18:54.158 "get_zone_info": false, 00:18:54.158 "zone_management": false, 00:18:54.158 "zone_append": false, 00:18:54.158 "compare": false, 00:18:54.158 "compare_and_write": false, 00:18:54.158 "abort": 
false, 00:18:54.158 "seek_hole": true, 00:18:54.158 "seek_data": true, 00:18:54.158 "copy": false, 00:18:54.158 "nvme_iov_md": false 00:18:54.158 }, 00:18:54.158 "driver_specific": { 00:18:54.158 "lvol": { 00:18:54.158 "lvol_store_uuid": "08b6f515-839d-46ef-bb5d-d111435ee391", 00:18:54.158 "base_bdev": "nvme0n1", 00:18:54.158 "thin_provision": true, 00:18:54.158 "num_allocated_clusters": 0, 00:18:54.158 "snapshot": false, 00:18:54.158 "clone": false, 00:18:54.158 "esnap_clone": false 00:18:54.158 } 00:18:54.158 } 00:18:54.158 } 00:18:54.158 ]' 00:18:54.158 09:44:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:54.158 09:44:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:54.158 09:44:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:54.158 09:44:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:54.158 09:44:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:54.158 09:44:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:54.158 09:44:31 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:54.158 09:44:31 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:54.158 09:44:31 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:54.416 09:44:32 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:54.416 09:44:32 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:54.416 09:44:32 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 24c79d73-299b-4a0a-9840-cf4a83433857 00:18:54.416 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=24c79d73-299b-4a0a-9840-cf4a83433857 00:18:54.416 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:54.416 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:54.416 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:54.416 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 24c79d73-299b-4a0a-9840-cf4a83433857 00:18:54.674 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:54.674 { 00:18:54.674 "name": "24c79d73-299b-4a0a-9840-cf4a83433857", 00:18:54.674 "aliases": [ 00:18:54.674 "lvs/nvme0n1p0" 00:18:54.674 ], 00:18:54.674 "product_name": "Logical Volume", 00:18:54.674 "block_size": 4096, 00:18:54.674 "num_blocks": 26476544, 00:18:54.674 "uuid": "24c79d73-299b-4a0a-9840-cf4a83433857", 00:18:54.674 "assigned_rate_limits": { 00:18:54.674 "rw_ios_per_sec": 0, 00:18:54.674 "rw_mbytes_per_sec": 0, 00:18:54.674 "r_mbytes_per_sec": 0, 00:18:54.674 "w_mbytes_per_sec": 0 00:18:54.674 }, 00:18:54.674 "claimed": false, 00:18:54.674 "zoned": false, 00:18:54.674 "supported_io_types": { 00:18:54.674 "read": true, 00:18:54.674 "write": true, 00:18:54.674 "unmap": true, 00:18:54.674 "flush": false, 00:18:54.674 "reset": true, 00:18:54.674 "nvme_admin": false, 00:18:54.674 "nvme_io": false, 00:18:54.674 "nvme_io_md": false, 00:18:54.674 "write_zeroes": true, 00:18:54.674 "zcopy": false, 00:18:54.674 "get_zone_info": false, 00:18:54.674 "zone_management": false, 00:18:54.674 "zone_append": false, 00:18:54.674 "compare": false, 00:18:54.674 "compare_and_write": false, 00:18:54.674 "abort": false, 00:18:54.674 "seek_hole": true, 00:18:54.674 "seek_data": 
true, 00:18:54.674 "copy": false, 00:18:54.674 "nvme_iov_md": false 00:18:54.674 }, 00:18:54.675 "driver_specific": { 00:18:54.675 "lvol": { 00:18:54.675 "lvol_store_uuid": "08b6f515-839d-46ef-bb5d-d111435ee391", 00:18:54.675 "base_bdev": "nvme0n1", 00:18:54.675 "thin_provision": true, 00:18:54.675 "num_allocated_clusters": 0, 00:18:54.675 "snapshot": false, 00:18:54.675 "clone": false, 00:18:54.675 "esnap_clone": false 00:18:54.675 } 00:18:54.675 } 00:18:54.675 } 00:18:54.675 ]' 00:18:54.675 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:54.675 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:54.675 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:54.675 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:54.675 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:54.675 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:54.675 09:44:32 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:54.675 09:44:32 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:54.933 09:44:32 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:54.933 09:44:32 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 24c79d73-299b-4a0a-9840-cf4a83433857 00:18:54.933 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=24c79d73-299b-4a0a-9840-cf4a83433857 00:18:54.933 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:54.933 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:54.933 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:54.933 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 24c79d73-299b-4a0a-9840-cf4a83433857 00:18:55.192 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:55.192 { 00:18:55.192 "name": "24c79d73-299b-4a0a-9840-cf4a83433857", 00:18:55.192 "aliases": [ 00:18:55.192 "lvs/nvme0n1p0" 00:18:55.192 ], 00:18:55.192 "product_name": "Logical Volume", 00:18:55.192 "block_size": 4096, 00:18:55.192 "num_blocks": 26476544, 00:18:55.192 "uuid": "24c79d73-299b-4a0a-9840-cf4a83433857", 00:18:55.192 "assigned_rate_limits": { 00:18:55.192 "rw_ios_per_sec": 0, 00:18:55.192 "rw_mbytes_per_sec": 0, 00:18:55.192 "r_mbytes_per_sec": 0, 00:18:55.192 "w_mbytes_per_sec": 0 00:18:55.192 }, 00:18:55.192 "claimed": false, 00:18:55.192 "zoned": false, 00:18:55.192 "supported_io_types": { 00:18:55.192 "read": true, 00:18:55.192 "write": true, 00:18:55.192 "unmap": true, 00:18:55.192 "flush": false, 00:18:55.192 "reset": true, 00:18:55.192 "nvme_admin": false, 00:18:55.192 "nvme_io": false, 00:18:55.192 "nvme_io_md": false, 00:18:55.192 "write_zeroes": true, 00:18:55.192 "zcopy": false, 00:18:55.192 "get_zone_info": false, 00:18:55.192 "zone_management": false, 00:18:55.192 "zone_append": false, 00:18:55.192 "compare": false, 00:18:55.192 "compare_and_write": false, 00:18:55.192 "abort": false, 00:18:55.192 "seek_hole": true, 00:18:55.192 "seek_data": true, 00:18:55.192 "copy": false, 00:18:55.192 "nvme_iov_md": false 00:18:55.192 }, 00:18:55.192 "driver_specific": { 00:18:55.192 "lvol": { 00:18:55.192 "lvol_store_uuid": "08b6f515-839d-46ef-bb5d-d111435ee391", 00:18:55.192 "base_bdev": 
"nvme0n1", 00:18:55.192 "thin_provision": true, 00:18:55.192 "num_allocated_clusters": 0, 00:18:55.192 "snapshot": false, 00:18:55.192 "clone": false, 00:18:55.192 "esnap_clone": false 00:18:55.192 } 00:18:55.192 } 00:18:55.192 } 00:18:55.192 ]' 00:18:55.192 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:55.192 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:55.192 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:55.192 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:55.192 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:55.192 09:44:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:55.192 09:44:32 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:55.192 09:44:32 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 24c79d73-299b-4a0a-9840-cf4a83433857 --l2p_dram_limit 10' 00:18:55.192 09:44:32 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:55.192 09:44:32 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:55.192 09:44:32 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:55.192 09:44:32 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:55.192 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:55.192 09:44:32 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 24c79d73-299b-4a0a-9840-cf4a83433857 --l2p_dram_limit 10 -c nvc0n1p0 00:18:55.452 [2024-07-24 09:44:33.011091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.452 [2024-07-24 09:44:33.011151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:55.452 [2024-07-24 09:44:33.011171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:55.452 [2024-07-24 09:44:33.011182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.452 [2024-07-24 09:44:33.011278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.452 [2024-07-24 09:44:33.011294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:55.452 [2024-07-24 09:44:33.011307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:55.452 [2024-07-24 09:44:33.011316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.452 [2024-07-24 09:44:33.011344] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:55.452 [2024-07-24 09:44:33.011620] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:55.452 [2024-07-24 09:44:33.011641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.452 [2024-07-24 09:44:33.011651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:55.452 [2024-07-24 09:44:33.011673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:18:55.452 [2024-07-24 09:44:33.011684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.452 [2024-07-24 09:44:33.011725] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6d23807e-12d1-42c4-9707-0f26913c394b 00:18:55.452 [2024-07-24 
09:44:33.013116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.452 [2024-07-24 09:44:33.013153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:55.452 [2024-07-24 09:44:33.013165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:55.453 [2024-07-24 09:44:33.013178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.453 [2024-07-24 09:44:33.020624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.453 [2024-07-24 09:44:33.020656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:55.453 [2024-07-24 09:44:33.020691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.374 ms 00:18:55.453 [2024-07-24 09:44:33.020708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.453 [2024-07-24 09:44:33.020788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.453 [2024-07-24 09:44:33.020811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:55.453 [2024-07-24 09:44:33.020821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:55.453 [2024-07-24 09:44:33.020834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.453 [2024-07-24 09:44:33.020896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.453 [2024-07-24 09:44:33.020912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:55.453 [2024-07-24 09:44:33.020923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:55.453 [2024-07-24 09:44:33.020935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.453 [2024-07-24 09:44:33.020966] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:55.453 [2024-07-24 09:44:33.022969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.453 [2024-07-24 09:44:33.023100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:55.453 [2024-07-24 09:44:33.023181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.010 ms 00:18:55.453 [2024-07-24 09:44:33.023242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.453 [2024-07-24 09:44:33.023307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.453 [2024-07-24 09:44:33.023363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:55.453 [2024-07-24 09:44:33.023404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:55.453 [2024-07-24 09:44:33.023433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.453 [2024-07-24 09:44:33.023487] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:55.453 [2024-07-24 09:44:33.023668] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:55.453 [2024-07-24 09:44:33.023812] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:55.453 [2024-07-24 09:44:33.023893] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:55.453 [2024-07-24 09:44:33.023912] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:18:55.453 [2024-07-24 09:44:33.023926] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:55.453 [2024-07-24 09:44:33.023942] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:55.453 [2024-07-24 09:44:33.023953] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:55.453 [2024-07-24 09:44:33.023965] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:55.453 [2024-07-24 09:44:33.023975] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:55.453 [2024-07-24 09:44:33.023998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.453 [2024-07-24 09:44:33.024009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:55.453 [2024-07-24 09:44:33.024022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:18:55.453 [2024-07-24 09:44:33.024032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.453 [2024-07-24 09:44:33.024111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.453 [2024-07-24 09:44:33.024121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:55.453 [2024-07-24 09:44:33.024140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:55.453 [2024-07-24 09:44:33.024150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.453 [2024-07-24 09:44:33.024251] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:55.453 [2024-07-24 09:44:33.024265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:55.453 [2024-07-24 09:44:33.024278] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:55.453 [2024-07-24 09:44:33.024299] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:55.453 [2024-07-24 09:44:33.024318] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:55.453 [2024-07-24 09:44:33.024328] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:55.453 [2024-07-24 09:44:33.024339] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:55.453 [2024-07-24 09:44:33.024349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:55.453 [2024-07-24 09:44:33.024360] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:55.453 [2024-07-24 09:44:33.024369] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:55.453 [2024-07-24 09:44:33.024380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:55.453 [2024-07-24 09:44:33.024389] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:55.453 [2024-07-24 09:44:33.024400] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:55.453 [2024-07-24 09:44:33.024409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:55.453 [2024-07-24 09:44:33.024423] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:55.453 [2024-07-24 09:44:33.024432] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:55.453 [2024-07-24 09:44:33.024443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:55.453 [2024-07-24 09:44:33.024453] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 
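A quick cross-check of the layout figures just printed (illustrative arithmetic only, not part of the test output): the reported 20971520 L2P entries at an address size of 4 bytes line up exactly with the 80.00 MiB l2p region above, and the --l2p_dram_limit 10 passed to bdev_ftl_create earlier caps how much of that table FTL keeps resident in DRAM.

# sketch: recompute the l2p region size from the two numbers in the dump
echo $(( 20971520 * 4 ))                 # 83886080 bytes
echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80 MiB, matching "Region l2p ... blocks: 80.00 MiB"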
00:18:55.453 [2024-07-24 09:44:33.024464] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:55.453 [2024-07-24 09:44:33.024473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:55.453 [2024-07-24 09:44:33.024485] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:55.453 [2024-07-24 09:44:33.024494] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:55.453 [2024-07-24 09:44:33.024507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:55.453 [2024-07-24 09:44:33.024516] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:55.453 [2024-07-24 09:44:33.024527] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:55.453 [2024-07-24 09:44:33.024536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:55.453 [2024-07-24 09:44:33.024547] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:55.453 [2024-07-24 09:44:33.024556] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:55.453 [2024-07-24 09:44:33.024567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:55.453 [2024-07-24 09:44:33.024576] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:55.453 [2024-07-24 09:44:33.024590] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:55.453 [2024-07-24 09:44:33.024599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:55.453 [2024-07-24 09:44:33.024610] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:55.453 [2024-07-24 09:44:33.024619] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:55.453 [2024-07-24 09:44:33.024630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:55.453 [2024-07-24 09:44:33.024639] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:55.453 [2024-07-24 09:44:33.024650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:55.453 [2024-07-24 09:44:33.024659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:55.453 [2024-07-24 09:44:33.024670] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:55.453 [2024-07-24 09:44:33.024679] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:55.453 [2024-07-24 09:44:33.024690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:55.453 [2024-07-24 09:44:33.024699] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:55.453 [2024-07-24 09:44:33.024710] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:55.453 [2024-07-24 09:44:33.024719] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:55.453 [2024-07-24 09:44:33.024731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:55.453 [2024-07-24 09:44:33.024741] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:55.453 [2024-07-24 09:44:33.024755] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:55.453 [2024-07-24 09:44:33.024769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:55.453 [2024-07-24 09:44:33.024781] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:55.453 [2024-07-24 09:44:33.024791] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:55.453 [2024-07-24 09:44:33.024802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:55.453 [2024-07-24 09:44:33.024812] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:55.453 [2024-07-24 09:44:33.024824] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:55.453 [2024-07-24 09:44:33.024838] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:55.453 [2024-07-24 09:44:33.024853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:55.453 [2024-07-24 09:44:33.024864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:55.453 [2024-07-24 09:44:33.024879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:55.453 [2024-07-24 09:44:33.024890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:55.453 [2024-07-24 09:44:33.024902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:55.453 [2024-07-24 09:44:33.024912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:55.453 [2024-07-24 09:44:33.024925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:55.453 [2024-07-24 09:44:33.024935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:55.454 [2024-07-24 09:44:33.024950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:55.454 [2024-07-24 09:44:33.024960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:55.454 [2024-07-24 09:44:33.024973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:55.454 [2024-07-24 09:44:33.024983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:55.454 [2024-07-24 09:44:33.024995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:55.454 [2024-07-24 09:44:33.025005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:55.454 [2024-07-24 09:44:33.025018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:55.454 [2024-07-24 09:44:33.025028] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:55.454 [2024-07-24 09:44:33.025041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:55.454 [2024-07-24 09:44:33.025052] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:55.454 [2024-07-24 09:44:33.025066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:55.454 [2024-07-24 09:44:33.025076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:55.454 [2024-07-24 09:44:33.025260] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:55.454 [2024-07-24 09:44:33.025272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.454 [2024-07-24 09:44:33.025285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:55.454 [2024-07-24 09:44:33.025295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:18:55.454 [2024-07-24 09:44:33.025309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.454 [2024-07-24 09:44:33.025370] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:55.454 [2024-07-24 09:44:33.025385] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:58.737 [2024-07-24 09:44:36.466788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.737 [2024-07-24 09:44:36.467033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:58.737 [2024-07-24 09:44:36.467133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3447.006 ms 00:18:58.737 [2024-07-24 09:44:36.467177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.737 [2024-07-24 09:44:36.478492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.737 [2024-07-24 09:44:36.478680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:58.737 [2024-07-24 09:44:36.478780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.191 ms 00:18:58.737 [2024-07-24 09:44:36.478827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.737 [2024-07-24 09:44:36.478984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.737 [2024-07-24 09:44:36.479145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:58.737 [2024-07-24 09:44:36.479221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:58.737 [2024-07-24 09:44:36.479259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.737 [2024-07-24 09:44:36.490413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.737 [2024-07-24 09:44:36.490628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:58.737 [2024-07-24 09:44:36.490725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.060 ms 00:18:58.737 [2024-07-24 09:44:36.490779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.737 [2024-07-24 09:44:36.490836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.737 [2024-07-24 09:44:36.490874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:58.737 [2024-07-24 09:44:36.490908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.003 ms 00:18:58.737 [2024-07-24 09:44:36.490943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.737 [2024-07-24 09:44:36.491465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.737 [2024-07-24 09:44:36.491521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:58.737 [2024-07-24 09:44:36.491582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:18:58.737 [2024-07-24 09:44:36.491717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.737 [2024-07-24 09:44:36.491854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.737 [2024-07-24 09:44:36.491975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:58.737 [2024-07-24 09:44:36.492017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:18:58.738 [2024-07-24 09:44:36.492053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.738 [2024-07-24 09:44:36.499824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.738 [2024-07-24 09:44:36.499976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:58.738 [2024-07-24 09:44:36.500130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.734 ms 00:18:58.738 [2024-07-24 09:44:36.500176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.738 [2024-07-24 09:44:36.508371] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:58.738 [2024-07-24 09:44:36.511736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.738 [2024-07-24 09:44:36.511857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:58.738 [2024-07-24 09:44:36.511950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.464 ms 00:18:58.738 [2024-07-24 09:44:36.511987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.994 [2024-07-24 09:44:36.598750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.994 [2024-07-24 09:44:36.598971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:58.994 [2024-07-24 09:44:36.599075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.846 ms 00:18:58.994 [2024-07-24 09:44:36.599091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.994 [2024-07-24 09:44:36.599342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.994 [2024-07-24 09:44:36.599357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:58.994 [2024-07-24 09:44:36.599380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:18:58.994 [2024-07-24 09:44:36.599390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.994 [2024-07-24 09:44:36.603009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.994 [2024-07-24 09:44:36.603046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:58.994 [2024-07-24 09:44:36.603065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.598 ms 00:18:58.994 [2024-07-24 09:44:36.603082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.994 [2024-07-24 09:44:36.605960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.994 [2024-07-24 
09:44:36.605996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:58.994 [2024-07-24 09:44:36.606012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.841 ms 00:18:58.994 [2024-07-24 09:44:36.606021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.994 [2024-07-24 09:44:36.606313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.994 [2024-07-24 09:44:36.606335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:58.994 [2024-07-24 09:44:36.606349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:18:58.994 [2024-07-24 09:44:36.606359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.994 [2024-07-24 09:44:36.645702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.994 [2024-07-24 09:44:36.645754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:58.994 [2024-07-24 09:44:36.645772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.375 ms 00:18:58.994 [2024-07-24 09:44:36.645791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.994 [2024-07-24 09:44:36.650281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.994 [2024-07-24 09:44:36.650319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:58.994 [2024-07-24 09:44:36.650336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.455 ms 00:18:58.994 [2024-07-24 09:44:36.650347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.994 [2024-07-24 09:44:36.653639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.994 [2024-07-24 09:44:36.653673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:58.994 [2024-07-24 09:44:36.653688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.255 ms 00:18:58.994 [2024-07-24 09:44:36.653697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.994 [2024-07-24 09:44:36.657365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.994 [2024-07-24 09:44:36.657398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:58.994 [2024-07-24 09:44:36.657414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.631 ms 00:18:58.994 [2024-07-24 09:44:36.657424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.994 [2024-07-24 09:44:36.657470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.994 [2024-07-24 09:44:36.657482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:58.994 [2024-07-24 09:44:36.657495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:58.994 [2024-07-24 09:44:36.657505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.994 [2024-07-24 09:44:36.657570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.994 [2024-07-24 09:44:36.657589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:58.994 [2024-07-24 09:44:36.657606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:58.995 [2024-07-24 09:44:36.657616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.995 [2024-07-24 09:44:36.658629] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3653.023 ms, result 0 00:18:58.995 { 00:18:58.995 "name": "ftl0", 00:18:58.995 "uuid": "6d23807e-12d1-42c4-9707-0f26913c394b" 00:18:58.995 } 00:18:58.995 09:44:36 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:58.995 09:44:36 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:59.252 09:44:36 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:59.252 09:44:36 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:59.252 [2024-07-24 09:44:37.054651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.252 [2024-07-24 09:44:37.054708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:59.252 [2024-07-24 09:44:37.054725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:59.252 [2024-07-24 09:44:37.054738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.252 [2024-07-24 09:44:37.054763] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:59.252 [2024-07-24 09:44:37.055460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.252 [2024-07-24 09:44:37.055473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:59.252 [2024-07-24 09:44:37.055492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:18:59.252 [2024-07-24 09:44:37.055502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.252 [2024-07-24 09:44:37.055721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.252 [2024-07-24 09:44:37.055737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:59.252 [2024-07-24 09:44:37.055754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:18:59.252 [2024-07-24 09:44:37.055763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.252 [2024-07-24 09:44:37.058268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.252 [2024-07-24 09:44:37.058291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:59.252 [2024-07-24 09:44:37.058305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.489 ms 00:18:59.252 [2024-07-24 09:44:37.058316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.252 [2024-07-24 09:44:37.063469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.252 [2024-07-24 09:44:37.063501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:59.252 [2024-07-24 09:44:37.063516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.128 ms 00:18:59.252 [2024-07-24 09:44:37.063530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.252 [2024-07-24 09:44:37.065044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.252 [2024-07-24 09:44:37.065084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:59.252 [2024-07-24 09:44:37.065102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.430 ms 00:18:59.252 [2024-07-24 09:44:37.065111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.511 [2024-07-24 
09:44:37.069561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.511 [2024-07-24 09:44:37.069613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:59.511 [2024-07-24 09:44:37.069633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.410 ms 00:18:59.511 [2024-07-24 09:44:37.069644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.511 [2024-07-24 09:44:37.069758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.511 [2024-07-24 09:44:37.069774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:59.511 [2024-07-24 09:44:37.069795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:18:59.511 [2024-07-24 09:44:37.069805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.511 [2024-07-24 09:44:37.071778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.511 [2024-07-24 09:44:37.071813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:59.511 [2024-07-24 09:44:37.071827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.953 ms 00:18:59.511 [2024-07-24 09:44:37.071838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.511 [2024-07-24 09:44:37.073265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.511 [2024-07-24 09:44:37.073297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:59.511 [2024-07-24 09:44:37.073314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.393 ms 00:18:59.511 [2024-07-24 09:44:37.073323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.511 [2024-07-24 09:44:37.074615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.511 [2024-07-24 09:44:37.074648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:59.511 [2024-07-24 09:44:37.074661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:18:59.511 [2024-07-24 09:44:37.074670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.511 [2024-07-24 09:44:37.075877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.511 [2024-07-24 09:44:37.075909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:59.511 [2024-07-24 09:44:37.075926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.151 ms 00:18:59.511 [2024-07-24 09:44:37.075935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.511 [2024-07-24 09:44:37.075966] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:59.511 [2024-07-24 09:44:37.075983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.075998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 
09:44:37.076048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:18:59.511 [2024-07-24 09:44:37.076363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:59.511 [2024-07-24 09:44:37.076424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.076992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.077003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.077015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.077026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.077041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.077052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.077065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.077076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.077089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.077099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.077112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.077122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.077143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.077154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.077166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.077177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.077198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.077210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.077223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:59.512 [2024-07-24 09:44:37.077241] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:59.512 [2024-07-24 09:44:37.077265] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6d23807e-12d1-42c4-9707-0f26913c394b 00:18:59.512 [2024-07-24 09:44:37.077276] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:59.512 [2024-07-24 09:44:37.077288] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:59.512 [2024-07-24 09:44:37.077297] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:59.512 [2024-07-24 09:44:37.077310] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:59.512 [2024-07-24 09:44:37.077323] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:59.512 [2024-07-24 09:44:37.077335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:59.512 [2024-07-24 09:44:37.077345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:59.512 [2024-07-24 09:44:37.077356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:59.512 [2024-07-24 09:44:37.077365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:59.512 [2024-07-24 09:44:37.077379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.512 [2024-07-24 09:44:37.077395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:59.512 [2024-07-24 09:44:37.077409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.416 ms 00:18:59.512 [2024-07-24 09:44:37.077418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.512 [2024-07-24 09:44:37.079170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.512 [2024-07-24 09:44:37.079205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:59.512 [2024-07-24 09:44:37.079225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.717 ms 00:18:59.512 [2024-07-24 09:44:37.079235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.512 [2024-07-24 09:44:37.079341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.512 [2024-07-24 09:44:37.079352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:59.512 [2024-07-24 09:44:37.079365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:18:59.512 [2024-07-24 09:44:37.079374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.512 [2024-07-24 09:44:37.086323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.512 [2024-07-24 09:44:37.086355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:59.512 [2024-07-24 09:44:37.086370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.512 [2024-07-24 09:44:37.086380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.512 [2024-07-24 09:44:37.086433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.512 [2024-07-24 09:44:37.086444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:59.513 [2024-07-24 09:44:37.086456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.513 [2024-07-24 09:44:37.086466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.513 [2024-07-24 09:44:37.086526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.513 [2024-07-24 09:44:37.086539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:59.513 [2024-07-24 09:44:37.086557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.513 [2024-07-24 09:44:37.086567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.513 [2024-07-24 09:44:37.086588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.513 [2024-07-24 09:44:37.086599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:18:59.513 [2024-07-24 09:44:37.086611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.513 [2024-07-24 09:44:37.086620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.513 [2024-07-24 09:44:37.098526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.513 [2024-07-24 09:44:37.098568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:59.513 [2024-07-24 09:44:37.098588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.513 [2024-07-24 09:44:37.098598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.513 [2024-07-24 09:44:37.106724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.513 [2024-07-24 09:44:37.106759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:59.513 [2024-07-24 09:44:37.106775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.513 [2024-07-24 09:44:37.106785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.513 [2024-07-24 09:44:37.106862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.513 [2024-07-24 09:44:37.106875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:59.513 [2024-07-24 09:44:37.106891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.513 [2024-07-24 09:44:37.106904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.513 [2024-07-24 09:44:37.106952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.513 [2024-07-24 09:44:37.106963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:59.513 [2024-07-24 09:44:37.106975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.513 [2024-07-24 09:44:37.106985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.513 [2024-07-24 09:44:37.107062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.513 [2024-07-24 09:44:37.107074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:59.513 [2024-07-24 09:44:37.107087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.513 [2024-07-24 09:44:37.107096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.513 [2024-07-24 09:44:37.107140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.513 [2024-07-24 09:44:37.107152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:59.513 [2024-07-24 09:44:37.107164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.513 [2024-07-24 09:44:37.107174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.513 [2024-07-24 09:44:37.107249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.513 [2024-07-24 09:44:37.107261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:59.513 [2024-07-24 09:44:37.107276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.513 [2024-07-24 09:44:37.107286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.513 [2024-07-24 09:44:37.107337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.513 [2024-07-24 09:44:37.107348] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:59.513 [2024-07-24 09:44:37.107360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.513 [2024-07-24 09:44:37.107370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.513 [2024-07-24 09:44:37.107507] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.896 ms, result 0 00:18:59.513 true 00:18:59.513 09:44:37 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 90703 00:18:59.513 09:44:37 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 90703 ']' 00:18:59.513 09:44:37 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 90703 00:18:59.513 09:44:37 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:18:59.513 09:44:37 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:59.513 09:44:37 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90703 00:18:59.513 killing process with pid 90703 00:18:59.513 09:44:37 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:59.513 09:44:37 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:59.513 09:44:37 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90703' 00:18:59.513 09:44:37 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 90703 00:18:59.513 09:44:37 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 90703 00:19:02.795 09:44:40 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:19:07.002 262144+0 records in 00:19:07.002 262144+0 records out 00:19:07.002 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.11836 s, 261 MB/s 00:19:07.002 09:44:44 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:08.377 09:44:46 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:08.377 [2024-07-24 09:44:46.133052] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:19:08.377 [2024-07-24 09:44:46.133216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90905 ] 00:19:08.635 [2024-07-24 09:44:46.304923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.635 [2024-07-24 09:44:46.346331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.635 [2024-07-24 09:44:46.448533] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:08.635 [2024-07-24 09:44:46.448609] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:08.895 [2024-07-24 09:44:46.607861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.895 [2024-07-24 09:44:46.607923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:08.895 [2024-07-24 09:44:46.607939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:08.895 [2024-07-24 09:44:46.607949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.895 [2024-07-24 09:44:46.608000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.895 [2024-07-24 09:44:46.608015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:08.895 [2024-07-24 09:44:46.608035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:08.895 [2024-07-24 09:44:46.608046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.895 [2024-07-24 09:44:46.608067] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:08.895 [2024-07-24 09:44:46.608361] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:08.895 [2024-07-24 09:44:46.608383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.895 [2024-07-24 09:44:46.608393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:08.895 [2024-07-24 09:44:46.608404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:19:08.895 [2024-07-24 09:44:46.608417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.895 [2024-07-24 09:44:46.609854] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:08.895 [2024-07-24 09:44:46.612443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.895 [2024-07-24 09:44:46.612482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:08.895 [2024-07-24 09:44:46.612496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.594 ms 00:19:08.895 [2024-07-24 09:44:46.612506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.895 [2024-07-24 09:44:46.612564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.895 [2024-07-24 09:44:46.612577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:08.895 [2024-07-24 09:44:46.612588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:08.895 [2024-07-24 09:44:46.612597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.895 [2024-07-24 09:44:46.619384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:08.895 [2024-07-24 09:44:46.619413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:08.895 [2024-07-24 09:44:46.619424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.731 ms 00:19:08.895 [2024-07-24 09:44:46.619442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.895 [2024-07-24 09:44:46.619538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.895 [2024-07-24 09:44:46.619550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:08.895 [2024-07-24 09:44:46.619564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:08.895 [2024-07-24 09:44:46.619574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.895 [2024-07-24 09:44:46.619634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.895 [2024-07-24 09:44:46.619648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:08.895 [2024-07-24 09:44:46.619661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:08.895 [2024-07-24 09:44:46.619671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.895 [2024-07-24 09:44:46.619695] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:08.895 [2024-07-24 09:44:46.621378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.895 [2024-07-24 09:44:46.621406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:08.895 [2024-07-24 09:44:46.621418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.691 ms 00:19:08.895 [2024-07-24 09:44:46.621427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.895 [2024-07-24 09:44:46.621463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.895 [2024-07-24 09:44:46.621474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:08.895 [2024-07-24 09:44:46.621484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:08.895 [2024-07-24 09:44:46.621493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.895 [2024-07-24 09:44:46.621515] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:08.895 [2024-07-24 09:44:46.621538] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:08.895 [2024-07-24 09:44:46.621590] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:08.895 [2024-07-24 09:44:46.621615] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:08.895 [2024-07-24 09:44:46.621698] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:08.895 [2024-07-24 09:44:46.621711] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:08.895 [2024-07-24 09:44:46.621723] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:08.895 [2024-07-24 09:44:46.621736] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:08.895 [2024-07-24 09:44:46.621747] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:08.895 [2024-07-24 09:44:46.621765] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:08.895 [2024-07-24 09:44:46.621775] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:08.895 [2024-07-24 09:44:46.621785] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:08.895 [2024-07-24 09:44:46.621794] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:08.895 [2024-07-24 09:44:46.621804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.895 [2024-07-24 09:44:46.621817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:08.895 [2024-07-24 09:44:46.621827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:19:08.895 [2024-07-24 09:44:46.621836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.895 [2024-07-24 09:44:46.621909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.895 [2024-07-24 09:44:46.621920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:08.895 [2024-07-24 09:44:46.621932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:08.895 [2024-07-24 09:44:46.621942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.895 [2024-07-24 09:44:46.622022] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:08.895 [2024-07-24 09:44:46.622041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:08.895 [2024-07-24 09:44:46.622063] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:08.895 [2024-07-24 09:44:46.622073] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.895 [2024-07-24 09:44:46.622084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:08.895 [2024-07-24 09:44:46.622093] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:08.895 [2024-07-24 09:44:46.622102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:08.895 [2024-07-24 09:44:46.622112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:08.895 [2024-07-24 09:44:46.622121] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:08.895 [2024-07-24 09:44:46.622130] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:08.895 [2024-07-24 09:44:46.622149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:08.895 [2024-07-24 09:44:46.622158] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:08.895 [2024-07-24 09:44:46.622167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:08.895 [2024-07-24 09:44:46.622177] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:08.895 [2024-07-24 09:44:46.622208] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:08.895 [2024-07-24 09:44:46.622219] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.895 [2024-07-24 09:44:46.622228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:08.895 [2024-07-24 09:44:46.622237] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:08.895 [2024-07-24 09:44:46.622246] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.895 [2024-07-24 09:44:46.622256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:08.895 [2024-07-24 09:44:46.622265] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:08.896 [2024-07-24 09:44:46.622274] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:08.896 [2024-07-24 09:44:46.622283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:08.896 [2024-07-24 09:44:46.622292] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:08.896 [2024-07-24 09:44:46.622301] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:08.896 [2024-07-24 09:44:46.622310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:08.896 [2024-07-24 09:44:46.622324] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:08.896 [2024-07-24 09:44:46.622335] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:08.896 [2024-07-24 09:44:46.622347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:08.896 [2024-07-24 09:44:46.622360] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:08.896 [2024-07-24 09:44:46.622375] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:08.896 [2024-07-24 09:44:46.622388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:08.896 [2024-07-24 09:44:46.622398] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:08.896 [2024-07-24 09:44:46.622411] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:08.896 [2024-07-24 09:44:46.622420] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:08.896 [2024-07-24 09:44:46.622433] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:08.896 [2024-07-24 09:44:46.622442] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:08.896 [2024-07-24 09:44:46.622450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:08.896 [2024-07-24 09:44:46.622463] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:08.896 [2024-07-24 09:44:46.622472] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.896 [2024-07-24 09:44:46.622482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:08.896 [2024-07-24 09:44:46.622491] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:08.896 [2024-07-24 09:44:46.622500] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.896 [2024-07-24 09:44:46.622510] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:08.896 [2024-07-24 09:44:46.622519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:08.896 [2024-07-24 09:44:46.622529] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:08.896 [2024-07-24 09:44:46.622542] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.896 [2024-07-24 09:44:46.622552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:08.896 [2024-07-24 09:44:46.622561] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:08.896 [2024-07-24 09:44:46.622570] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:08.896 
[2024-07-24 09:44:46.622579] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:08.896 [2024-07-24 09:44:46.622588] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:08.896 [2024-07-24 09:44:46.622603] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:08.896 [2024-07-24 09:44:46.622614] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:08.896 [2024-07-24 09:44:46.622633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:08.896 [2024-07-24 09:44:46.622645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:08.896 [2024-07-24 09:44:46.622655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:08.896 [2024-07-24 09:44:46.622666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:08.896 [2024-07-24 09:44:46.622676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:08.896 [2024-07-24 09:44:46.622686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:08.896 [2024-07-24 09:44:46.622696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:08.896 [2024-07-24 09:44:46.622706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:08.896 [2024-07-24 09:44:46.622719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:08.896 [2024-07-24 09:44:46.622730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:08.896 [2024-07-24 09:44:46.622740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:08.896 [2024-07-24 09:44:46.622750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:08.896 [2024-07-24 09:44:46.622759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:08.896 [2024-07-24 09:44:46.622769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:08.896 [2024-07-24 09:44:46.622779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:08.896 [2024-07-24 09:44:46.622789] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:08.896 [2024-07-24 09:44:46.622800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:08.896 [2024-07-24 09:44:46.622811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:08.896 [2024-07-24 09:44:46.622821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:08.896 [2024-07-24 09:44:46.622831] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:08.896 [2024-07-24 09:44:46.622841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:08.896 [2024-07-24 09:44:46.622852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.896 [2024-07-24 09:44:46.622864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:08.896 [2024-07-24 09:44:46.622874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.882 ms 00:19:08.896 [2024-07-24 09:44:46.622886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.896 [2024-07-24 09:44:46.643708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.896 [2024-07-24 09:44:46.644013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:08.896 [2024-07-24 09:44:46.644113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.790 ms 00:19:08.896 [2024-07-24 09:44:46.644153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.896 [2024-07-24 09:44:46.644306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.896 [2024-07-24 09:44:46.644407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:08.896 [2024-07-24 09:44:46.644454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:08.896 [2024-07-24 09:44:46.644486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.896 [2024-07-24 09:44:46.655921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.896 [2024-07-24 09:44:46.656207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:08.896 [2024-07-24 09:44:46.656310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.288 ms 00:19:08.896 [2024-07-24 09:44:46.656353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.896 [2024-07-24 09:44:46.656437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.896 [2024-07-24 09:44:46.656484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:08.896 [2024-07-24 09:44:46.656576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:08.896 [2024-07-24 09:44:46.656627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.896 [2024-07-24 09:44:46.657187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.896 [2024-07-24 09:44:46.657330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:08.896 [2024-07-24 09:44:46.657409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:19:08.896 [2024-07-24 09:44:46.657443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.896 [2024-07-24 09:44:46.657591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.896 [2024-07-24 09:44:46.657627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:08.896 [2024-07-24 09:44:46.657701] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:19:08.896 [2024-07-24 09:44:46.657742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.896 [2024-07-24 09:44:46.663853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.896 [2024-07-24 09:44:46.663996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:08.896 [2024-07-24 09:44:46.664071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.073 ms 00:19:08.896 [2024-07-24 09:44:46.664105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.896 [2024-07-24 09:44:46.666681] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:08.896 [2024-07-24 09:44:46.666843] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:08.896 [2024-07-24 09:44:46.666934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.896 [2024-07-24 09:44:46.666967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:08.896 [2024-07-24 09:44:46.666985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.706 ms 00:19:08.896 [2024-07-24 09:44:46.666995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.896 [2024-07-24 09:44:46.679691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.897 [2024-07-24 09:44:46.679732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:08.897 [2024-07-24 09:44:46.679746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.648 ms 00:19:08.897 [2024-07-24 09:44:46.679756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.897 [2024-07-24 09:44:46.681495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.897 [2024-07-24 09:44:46.681528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:08.897 [2024-07-24 09:44:46.681540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.701 ms 00:19:08.897 [2024-07-24 09:44:46.681549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.897 [2024-07-24 09:44:46.683140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.897 [2024-07-24 09:44:46.683174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:08.897 [2024-07-24 09:44:46.683186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.559 ms 00:19:08.897 [2024-07-24 09:44:46.683212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.897 [2024-07-24 09:44:46.683490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.897 [2024-07-24 09:44:46.683513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:08.897 [2024-07-24 09:44:46.683530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:19:08.897 [2024-07-24 09:44:46.683543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.897 [2024-07-24 09:44:46.704258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.897 [2024-07-24 09:44:46.704319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:08.897 [2024-07-24 09:44:46.704336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
20.722 ms 00:19:08.897 [2024-07-24 09:44:46.704346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.897 [2024-07-24 09:44:46.710671] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:09.155 [2024-07-24 09:44:46.713522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.155 [2024-07-24 09:44:46.713555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:09.155 [2024-07-24 09:44:46.713568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.148 ms 00:19:09.155 [2024-07-24 09:44:46.713577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.155 [2024-07-24 09:44:46.713671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.155 [2024-07-24 09:44:46.713692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:09.155 [2024-07-24 09:44:46.713711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:09.155 [2024-07-24 09:44:46.713720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.155 [2024-07-24 09:44:46.713781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.155 [2024-07-24 09:44:46.713793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:09.155 [2024-07-24 09:44:46.713803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:09.155 [2024-07-24 09:44:46.713813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.155 [2024-07-24 09:44:46.713832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.155 [2024-07-24 09:44:46.713843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:09.155 [2024-07-24 09:44:46.713853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:09.155 [2024-07-24 09:44:46.713862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.155 [2024-07-24 09:44:46.713896] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:09.155 [2024-07-24 09:44:46.713908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.155 [2024-07-24 09:44:46.713921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:09.155 [2024-07-24 09:44:46.713938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:09.155 [2024-07-24 09:44:46.713948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.155 [2024-07-24 09:44:46.717510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.155 [2024-07-24 09:44:46.717543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:09.155 [2024-07-24 09:44:46.717556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.536 ms 00:19:09.155 [2024-07-24 09:44:46.717566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.155 [2024-07-24 09:44:46.717654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.155 [2024-07-24 09:44:46.717667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:09.155 [2024-07-24 09:44:46.717685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:09.155 [2024-07-24 09:44:46.717694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.155 
[2024-07-24 09:44:46.718789] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 110.728 ms, result 0 00:19:45.389  Copying: 28/1024 [MB] (28 MBps) Copying: 58/1024 [MB] (29 MBps) Copying: 87/1024 [MB] (29 MBps) Copying: 119/1024 [MB] (31 MBps) Copying: 148/1024 [MB] (28 MBps) Copying: 174/1024 [MB] (26 MBps) Copying: 200/1024 [MB] (25 MBps) Copying: 227/1024 [MB] (26 MBps) Copying: 254/1024 [MB] (26 MBps) Copying: 280/1024 [MB] (26 MBps) Copying: 306/1024 [MB] (26 MBps) Copying: 333/1024 [MB] (26 MBps) Copying: 359/1024 [MB] (26 MBps) Copying: 385/1024 [MB] (25 MBps) Copying: 411/1024 [MB] (26 MBps) Copying: 436/1024 [MB] (25 MBps) Copying: 464/1024 [MB] (27 MBps) Copying: 492/1024 [MB] (28 MBps) Copying: 520/1024 [MB] (28 MBps) Copying: 548/1024 [MB] (27 MBps) Copying: 576/1024 [MB] (27 MBps) Copying: 604/1024 [MB] (27 MBps) Copying: 635/1024 [MB] (31 MBps) Copying: 663/1024 [MB] (27 MBps) Copying: 690/1024 [MB] (27 MBps) Copying: 719/1024 [MB] (29 MBps) Copying: 749/1024 [MB] (29 MBps) Copying: 778/1024 [MB] (29 MBps) Copying: 808/1024 [MB] (29 MBps) Copying: 837/1024 [MB] (29 MBps) Copying: 867/1024 [MB] (29 MBps) Copying: 903/1024 [MB] (36 MBps) Copying: 932/1024 [MB] (28 MBps) Copying: 959/1024 [MB] (27 MBps) Copying: 986/1024 [MB] (26 MBps) Copying: 1013/1024 [MB] (26 MBps) Copying: 1024/1024 [MB] (average 28 MBps)[2024-07-24 09:45:23.074210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.389 [2024-07-24 09:45:23.074275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:45.389 [2024-07-24 09:45:23.074291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:45.389 [2024-07-24 09:45:23.074301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.389 [2024-07-24 09:45:23.074323] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:45.389 [2024-07-24 09:45:23.074989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.389 [2024-07-24 09:45:23.075010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:45.389 [2024-07-24 09:45:23.075021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:19:45.389 [2024-07-24 09:45:23.075030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.389 [2024-07-24 09:45:23.076822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.389 [2024-07-24 09:45:23.076871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:45.389 [2024-07-24 09:45:23.076884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.774 ms 00:19:45.389 [2024-07-24 09:45:23.076894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.389 [2024-07-24 09:45:23.094386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.389 [2024-07-24 09:45:23.094431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:45.389 [2024-07-24 09:45:23.094445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.503 ms 00:19:45.389 [2024-07-24 09:45:23.094455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.389 [2024-07-24 09:45:23.099507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.389 [2024-07-24 09:45:23.099542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P 
trims 00:19:45.389 [2024-07-24 09:45:23.099560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.027 ms 00:19:45.389 [2024-07-24 09:45:23.099580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.389 [2024-07-24 09:45:23.101330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.389 [2024-07-24 09:45:23.101366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:45.389 [2024-07-24 09:45:23.101378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:19:45.389 [2024-07-24 09:45:23.101387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.389 [2024-07-24 09:45:23.104759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.389 [2024-07-24 09:45:23.104795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:45.389 [2024-07-24 09:45:23.104807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.349 ms 00:19:45.389 [2024-07-24 09:45:23.104816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.389 [2024-07-24 09:45:23.104931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.389 [2024-07-24 09:45:23.104943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:45.389 [2024-07-24 09:45:23.104961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:45.389 [2024-07-24 09:45:23.104970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.389 [2024-07-24 09:45:23.107017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.389 [2024-07-24 09:45:23.107049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:45.389 [2024-07-24 09:45:23.107061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.034 ms 00:19:45.389 [2024-07-24 09:45:23.107069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.389 [2024-07-24 09:45:23.108506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.389 [2024-07-24 09:45:23.108538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:45.389 [2024-07-24 09:45:23.108548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.411 ms 00:19:45.389 [2024-07-24 09:45:23.108557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.389 [2024-07-24 09:45:23.109861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.389 [2024-07-24 09:45:23.109899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:45.389 [2024-07-24 09:45:23.109923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.280 ms 00:19:45.389 [2024-07-24 09:45:23.109932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.389 [2024-07-24 09:45:23.111209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.389 [2024-07-24 09:45:23.111241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:45.389 [2024-07-24 09:45:23.111252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.232 ms 00:19:45.389 [2024-07-24 09:45:23.111261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.389 [2024-07-24 09:45:23.111341] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:45.389 [2024-07-24 
09:45:23.111371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 
[2024-07-24 09:45:23.111631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 
state: free 00:19:45.389 [2024-07-24 09:45:23.111889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.111996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 
0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:45.389 [2024-07-24 09:45:23.112380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:45.390 [2024-07-24 09:45:23.112390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:45.390 [2024-07-24 09:45:23.112400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:45.390 [2024-07-24 09:45:23.112410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:45.390 [2024-07-24 09:45:23.112428] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:45.390 [2024-07-24 09:45:23.112438] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6d23807e-12d1-42c4-9707-0f26913c394b 00:19:45.390 [2024-07-24 09:45:23.112448] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:45.390 [2024-07-24 09:45:23.112458] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:45.390 [2024-07-24 09:45:23.112467] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:45.390 [2024-07-24 09:45:23.112477] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:45.390 [2024-07-24 09:45:23.112486] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:45.390 [2024-07-24 09:45:23.112499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:45.390 [2024-07-24 09:45:23.112508] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:45.390 [2024-07-24 09:45:23.112517] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:45.390 [2024-07-24 09:45:23.112526] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:45.390 [2024-07-24 09:45:23.112535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.390 [2024-07-24 09:45:23.112545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:45.390 [2024-07-24 09:45:23.112555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.197 ms 00:19:45.390 [2024-07-24 09:45:23.112564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.390 [2024-07-24 09:45:23.114321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.390 [2024-07-24 09:45:23.114345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:45.390 [2024-07-24 09:45:23.114357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.744 ms 00:19:45.390 [2024-07-24 09:45:23.114371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.390 [2024-07-24 09:45:23.114471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.390 [2024-07-24 09:45:23.114482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:45.390 [2024-07-24 09:45:23.114501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:19:45.390 [2024-07-24 09:45:23.114510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.390 [2024-07-24 09:45:23.120477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.390 [2024-07-24 09:45:23.120502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:45.390 [2024-07-24 09:45:23.120518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.390 [2024-07-24 09:45:23.120528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.390 [2024-07-24 09:45:23.120572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.390 [2024-07-24 09:45:23.120583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:45.390 [2024-07-24 09:45:23.120593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.390 [2024-07-24 09:45:23.120602] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:45.390 [2024-07-24 09:45:23.120663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.390 [2024-07-24 09:45:23.120675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:45.390 [2024-07-24 09:45:23.120685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.390 [2024-07-24 09:45:23.120699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.390 [2024-07-24 09:45:23.120721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.390 [2024-07-24 09:45:23.120733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:45.390 [2024-07-24 09:45:23.120743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.390 [2024-07-24 09:45:23.120752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.390 [2024-07-24 09:45:23.133847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.390 [2024-07-24 09:45:23.133905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:45.390 [2024-07-24 09:45:23.133928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.390 [2024-07-24 09:45:23.133938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.390 [2024-07-24 09:45:23.142279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.390 [2024-07-24 09:45:23.142342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:45.390 [2024-07-24 09:45:23.142356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.390 [2024-07-24 09:45:23.142366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.390 [2024-07-24 09:45:23.142420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.390 [2024-07-24 09:45:23.142432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:45.390 [2024-07-24 09:45:23.142442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.390 [2024-07-24 09:45:23.142452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.390 [2024-07-24 09:45:23.142484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.390 [2024-07-24 09:45:23.142494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:45.390 [2024-07-24 09:45:23.142505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.390 [2024-07-24 09:45:23.142523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.390 [2024-07-24 09:45:23.142599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.390 [2024-07-24 09:45:23.142611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:45.390 [2024-07-24 09:45:23.142622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.390 [2024-07-24 09:45:23.142631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.390 [2024-07-24 09:45:23.142663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.390 [2024-07-24 09:45:23.142679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:45.390 [2024-07-24 09:45:23.142689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:19:45.390 [2024-07-24 09:45:23.142706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.390 [2024-07-24 09:45:23.142743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.390 [2024-07-24 09:45:23.142754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:45.390 [2024-07-24 09:45:23.142764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.390 [2024-07-24 09:45:23.142773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.390 [2024-07-24 09:45:23.142820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.390 [2024-07-24 09:45:23.142832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:45.390 [2024-07-24 09:45:23.142842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.390 [2024-07-24 09:45:23.142854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.390 [2024-07-24 09:45:23.142971] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 68.856 ms, result 0 00:19:45.957 00:19:45.957 00:19:45.957 09:45:23 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:46.216 [2024-07-24 09:45:23.796230] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:19:46.216 [2024-07-24 09:45:23.796367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91295 ] 00:19:46.216 [2024-07-24 09:45:23.964357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.216 [2024-07-24 09:45:24.009180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.476 [2024-07-24 09:45:24.110426] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:46.476 [2024-07-24 09:45:24.110506] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:46.476 [2024-07-24 09:45:24.268973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.476 [2024-07-24 09:45:24.269035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:46.476 [2024-07-24 09:45:24.269051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:46.476 [2024-07-24 09:45:24.269062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.476 [2024-07-24 09:45:24.269123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.476 [2024-07-24 09:45:24.269145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:46.476 [2024-07-24 09:45:24.269163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:46.476 [2024-07-24 09:45:24.269172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.476 [2024-07-24 09:45:24.269214] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:46.476 [2024-07-24 09:45:24.269432] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 
00:19:46.476 [2024-07-24 09:45:24.269450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.476 [2024-07-24 09:45:24.269461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:46.476 [2024-07-24 09:45:24.269471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:19:46.476 [2024-07-24 09:45:24.269484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.476 [2024-07-24 09:45:24.271017] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:46.476 [2024-07-24 09:45:24.273517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.476 [2024-07-24 09:45:24.273566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:46.476 [2024-07-24 09:45:24.273579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.506 ms 00:19:46.476 [2024-07-24 09:45:24.273596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.476 [2024-07-24 09:45:24.273655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.476 [2024-07-24 09:45:24.273667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:46.476 [2024-07-24 09:45:24.273677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:46.476 [2024-07-24 09:45:24.273687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.476 [2024-07-24 09:45:24.280370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.476 [2024-07-24 09:45:24.280402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:46.476 [2024-07-24 09:45:24.280414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.629 ms 00:19:46.476 [2024-07-24 09:45:24.280423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.476 [2024-07-24 09:45:24.280516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.476 [2024-07-24 09:45:24.280541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:46.476 [2024-07-24 09:45:24.280556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:46.476 [2024-07-24 09:45:24.280566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.476 [2024-07-24 09:45:24.280622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.476 [2024-07-24 09:45:24.280636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:46.476 [2024-07-24 09:45:24.280656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:46.476 [2024-07-24 09:45:24.280666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.476 [2024-07-24 09:45:24.280691] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:46.476 [2024-07-24 09:45:24.282343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.476 [2024-07-24 09:45:24.282372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:46.476 [2024-07-24 09:45:24.282384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.660 ms 00:19:46.476 [2024-07-24 09:45:24.282393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.476 [2024-07-24 09:45:24.282429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.476 
[2024-07-24 09:45:24.282440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:46.476 [2024-07-24 09:45:24.282450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:46.476 [2024-07-24 09:45:24.282468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.476 [2024-07-24 09:45:24.282489] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:46.476 [2024-07-24 09:45:24.282513] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:46.476 [2024-07-24 09:45:24.282547] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:46.476 [2024-07-24 09:45:24.282569] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:46.477 [2024-07-24 09:45:24.282652] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:46.477 [2024-07-24 09:45:24.282665] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:46.477 [2024-07-24 09:45:24.282684] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:46.477 [2024-07-24 09:45:24.282709] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:46.477 [2024-07-24 09:45:24.282721] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:46.477 [2024-07-24 09:45:24.282731] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:46.477 [2024-07-24 09:45:24.282740] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:46.477 [2024-07-24 09:45:24.282757] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:46.477 [2024-07-24 09:45:24.282767] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:46.477 [2024-07-24 09:45:24.282777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.477 [2024-07-24 09:45:24.282790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:46.477 [2024-07-24 09:45:24.282800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:19:46.477 [2024-07-24 09:45:24.282809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.477 [2024-07-24 09:45:24.282878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.477 [2024-07-24 09:45:24.282888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:46.477 [2024-07-24 09:45:24.282908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:46.477 [2024-07-24 09:45:24.282917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.477 [2024-07-24 09:45:24.283008] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:46.477 [2024-07-24 09:45:24.283023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:46.477 [2024-07-24 09:45:24.283037] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.477 [2024-07-24 09:45:24.283047] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.477 [2024-07-24 09:45:24.283064] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:46.477 [2024-07-24 09:45:24.283073] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:46.477 [2024-07-24 09:45:24.283083] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:46.477 [2024-07-24 09:45:24.283092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:46.477 [2024-07-24 09:45:24.283101] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:46.477 [2024-07-24 09:45:24.283110] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.477 [2024-07-24 09:45:24.283128] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:46.477 [2024-07-24 09:45:24.283138] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:46.477 [2024-07-24 09:45:24.283147] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.477 [2024-07-24 09:45:24.283157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:46.477 [2024-07-24 09:45:24.283170] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:46.477 [2024-07-24 09:45:24.283179] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.477 [2024-07-24 09:45:24.283199] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:46.477 [2024-07-24 09:45:24.283209] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:46.477 [2024-07-24 09:45:24.283218] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.477 [2024-07-24 09:45:24.283227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:46.477 [2024-07-24 09:45:24.283236] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:46.477 [2024-07-24 09:45:24.283245] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.477 [2024-07-24 09:45:24.283254] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:46.477 [2024-07-24 09:45:24.283263] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:46.477 [2024-07-24 09:45:24.283272] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.477 [2024-07-24 09:45:24.283281] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:46.477 [2024-07-24 09:45:24.283290] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:46.477 [2024-07-24 09:45:24.283299] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.477 [2024-07-24 09:45:24.283308] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:46.477 [2024-07-24 09:45:24.283316] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:46.477 [2024-07-24 09:45:24.283331] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.477 [2024-07-24 09:45:24.283340] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:46.477 [2024-07-24 09:45:24.283349] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:46.477 [2024-07-24 09:45:24.283359] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.477 [2024-07-24 09:45:24.283367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:46.477 [2024-07-24 09:45:24.283376] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:46.477 [2024-07-24 
09:45:24.283385] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.477 [2024-07-24 09:45:24.283394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:46.477 [2024-07-24 09:45:24.283403] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:46.477 [2024-07-24 09:45:24.283411] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.477 [2024-07-24 09:45:24.283421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:46.477 [2024-07-24 09:45:24.283429] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:46.477 [2024-07-24 09:45:24.283438] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.477 [2024-07-24 09:45:24.283446] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:46.477 [2024-07-24 09:45:24.283456] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:46.477 [2024-07-24 09:45:24.283467] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.477 [2024-07-24 09:45:24.283480] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.477 [2024-07-24 09:45:24.283490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:46.477 [2024-07-24 09:45:24.283499] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:46.477 [2024-07-24 09:45:24.283508] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:46.477 [2024-07-24 09:45:24.283517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:46.477 [2024-07-24 09:45:24.283525] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:46.477 [2024-07-24 09:45:24.283535] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:46.477 [2024-07-24 09:45:24.283545] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:46.477 [2024-07-24 09:45:24.283564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.477 [2024-07-24 09:45:24.283576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:46.477 [2024-07-24 09:45:24.283586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:46.477 [2024-07-24 09:45:24.283596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:46.477 [2024-07-24 09:45:24.283606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:46.477 [2024-07-24 09:45:24.283616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:46.477 [2024-07-24 09:45:24.283626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:46.477 [2024-07-24 09:45:24.283636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:46.477 [2024-07-24 09:45:24.283649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:46.477 [2024-07-24 09:45:24.283660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:46.477 [2024-07-24 09:45:24.283670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:46.477 [2024-07-24 09:45:24.283679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:46.477 [2024-07-24 09:45:24.283689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:46.477 [2024-07-24 09:45:24.283698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:46.477 [2024-07-24 09:45:24.283708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:46.477 [2024-07-24 09:45:24.283718] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:46.477 [2024-07-24 09:45:24.283728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.477 [2024-07-24 09:45:24.283739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:46.477 [2024-07-24 09:45:24.283748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:46.478 [2024-07-24 09:45:24.283759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:46.478 [2024-07-24 09:45:24.283768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:46.478 [2024-07-24 09:45:24.283779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.478 [2024-07-24 09:45:24.283792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:46.478 [2024-07-24 09:45:24.283801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.823 ms 00:19:46.478 [2024-07-24 09:45:24.283813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.737 [2024-07-24 09:45:24.304602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.737 [2024-07-24 09:45:24.304642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:46.737 [2024-07-24 09:45:24.304655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.765 ms 00:19:46.737 [2024-07-24 09:45:24.304665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.737 [2024-07-24 09:45:24.304751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.737 [2024-07-24 09:45:24.304763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:46.737 [2024-07-24 09:45:24.304788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:46.737 [2024-07-24 09:45:24.304798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.737 [2024-07-24 09:45:24.315787] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.737 [2024-07-24 09:45:24.315834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:46.737 [2024-07-24 09:45:24.315851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.942 ms 00:19:46.737 [2024-07-24 09:45:24.315864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.737 [2024-07-24 09:45:24.315908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.737 [2024-07-24 09:45:24.315927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:46.737 [2024-07-24 09:45:24.315940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:46.737 [2024-07-24 09:45:24.315952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.737 [2024-07-24 09:45:24.316466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.737 [2024-07-24 09:45:24.316502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:46.737 [2024-07-24 09:45:24.316520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:19:46.737 [2024-07-24 09:45:24.316532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.737 [2024-07-24 09:45:24.316670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.737 [2024-07-24 09:45:24.316687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:46.737 [2024-07-24 09:45:24.316704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:19:46.737 [2024-07-24 09:45:24.316715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.737 [2024-07-24 09:45:24.322742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.737 [2024-07-24 09:45:24.322781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:46.737 [2024-07-24 09:45:24.322794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.011 ms 00:19:46.737 [2024-07-24 09:45:24.322812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.737 [2024-07-24 09:45:24.325465] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:46.737 [2024-07-24 09:45:24.325505] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:46.737 [2024-07-24 09:45:24.325520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.737 [2024-07-24 09:45:24.325530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:46.737 [2024-07-24 09:45:24.325544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.613 ms 00:19:46.737 [2024-07-24 09:45:24.325562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.737 [2024-07-24 09:45:24.338206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.737 [2024-07-24 09:45:24.338251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:46.737 [2024-07-24 09:45:24.338265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.623 ms 00:19:46.737 [2024-07-24 09:45:24.338275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.737 [2024-07-24 09:45:24.340107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.737 
[2024-07-24 09:45:24.340141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:46.737 [2024-07-24 09:45:24.340153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.787 ms 00:19:46.737 [2024-07-24 09:45:24.340162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.737 [2024-07-24 09:45:24.341588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.737 [2024-07-24 09:45:24.341624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:46.737 [2024-07-24 09:45:24.341636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.382 ms 00:19:46.737 [2024-07-24 09:45:24.341646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.737 [2024-07-24 09:45:24.341937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.737 [2024-07-24 09:45:24.341960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:46.737 [2024-07-24 09:45:24.341981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:19:46.737 [2024-07-24 09:45:24.341991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.737 [2024-07-24 09:45:24.362287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.737 [2024-07-24 09:45:24.362359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:46.737 [2024-07-24 09:45:24.362387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.308 ms 00:19:46.737 [2024-07-24 09:45:24.362398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.737 [2024-07-24 09:45:24.368664] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:46.737 [2024-07-24 09:45:24.371683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.738 [2024-07-24 09:45:24.371716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:46.738 [2024-07-24 09:45:24.371729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.251 ms 00:19:46.738 [2024-07-24 09:45:24.371739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.738 [2024-07-24 09:45:24.371820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.738 [2024-07-24 09:45:24.371835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:46.738 [2024-07-24 09:45:24.371846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:46.738 [2024-07-24 09:45:24.371871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.738 [2024-07-24 09:45:24.371967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.738 [2024-07-24 09:45:24.371978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:46.738 [2024-07-24 09:45:24.371989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:46.738 [2024-07-24 09:45:24.372000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.738 [2024-07-24 09:45:24.372024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.738 [2024-07-24 09:45:24.372034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:46.738 [2024-07-24 09:45:24.372044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:46.738 [2024-07-24 
09:45:24.372053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.738 [2024-07-24 09:45:24.372087] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:46.738 [2024-07-24 09:45:24.372098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.738 [2024-07-24 09:45:24.372107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:46.738 [2024-07-24 09:45:24.372117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:46.738 [2024-07-24 09:45:24.372126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.738 [2024-07-24 09:45:24.375808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.738 [2024-07-24 09:45:24.375843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:46.738 [2024-07-24 09:45:24.375856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.660 ms 00:19:46.738 [2024-07-24 09:45:24.375866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.738 [2024-07-24 09:45:24.375928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.738 [2024-07-24 09:45:24.375946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:46.738 [2024-07-24 09:45:24.375956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:46.738 [2024-07-24 09:45:24.375966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.738 [2024-07-24 09:45:24.377126] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 107.915 ms, result 0 00:20:20.776  Copying: 28/1024 [MB] (28 MBps) Copying: 56/1024 [MB] (28 MBps) Copying: 87/1024 [MB] (31 MBps) Copying: 121/1024 [MB] (33 MBps) Copying: 150/1024 [MB] (29 MBps) Copying: 180/1024 [MB] (29 MBps) Copying: 212/1024 [MB] (32 MBps) Copying: 253/1024 [MB] (41 MBps) Copying: 291/1024 [MB] (38 MBps) Copying: 325/1024 [MB] (33 MBps) Copying: 355/1024 [MB] (30 MBps) Copying: 386/1024 [MB] (31 MBps) Copying: 417/1024 [MB] (30 MBps) Copying: 447/1024 [MB] (30 MBps) Copying: 478/1024 [MB] (30 MBps) Copying: 508/1024 [MB] (30 MBps) Copying: 539/1024 [MB] (30 MBps) Copying: 568/1024 [MB] (29 MBps) Copying: 597/1024 [MB] (28 MBps) Copying: 625/1024 [MB] (27 MBps) Copying: 653/1024 [MB] (28 MBps) Copying: 681/1024 [MB] (27 MBps) Copying: 710/1024 [MB] (28 MBps) Copying: 738/1024 [MB] (28 MBps) Copying: 767/1024 [MB] (28 MBps) Copying: 796/1024 [MB] (29 MBps) Copying: 826/1024 [MB] (29 MBps) Copying: 856/1024 [MB] (29 MBps) Copying: 887/1024 [MB] (31 MBps) Copying: 916/1024 [MB] (29 MBps) Copying: 945/1024 [MB] (29 MBps) Copying: 974/1024 [MB] (28 MBps) Copying: 1003/1024 [MB] (28 MBps) Copying: 1024/1024 [MB] (average 30 MBps)[2024-07-24 09:45:58.359682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.776 [2024-07-24 09:45:58.359759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:20.776 [2024-07-24 09:45:58.359781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:20.776 [2024-07-24 09:45:58.359796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.776 [2024-07-24 09:45:58.359829] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:20.776 [2024-07-24 09:45:58.360566] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:20:20.776 [2024-07-24 09:45:58.360585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:20.776 [2024-07-24 09:45:58.360600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:20:20.776 [2024-07-24 09:45:58.360615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.776 [2024-07-24 09:45:58.360893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.776 [2024-07-24 09:45:58.360916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:20.776 [2024-07-24 09:45:58.360931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:20:20.776 [2024-07-24 09:45:58.360945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.776 [2024-07-24 09:45:58.365007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.776 [2024-07-24 09:45:58.365060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:20.776 [2024-07-24 09:45:58.365077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.036 ms 00:20:20.776 [2024-07-24 09:45:58.365091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.776 [2024-07-24 09:45:58.372600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.776 [2024-07-24 09:45:58.372651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:20.776 [2024-07-24 09:45:58.372667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.494 ms 00:20:20.776 [2024-07-24 09:45:58.372680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.776 [2024-07-24 09:45:58.374571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.776 [2024-07-24 09:45:58.374616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:20.776 [2024-07-24 09:45:58.374632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.805 ms 00:20:20.776 [2024-07-24 09:45:58.374645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.776 [2024-07-24 09:45:58.378633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.776 [2024-07-24 09:45:58.378678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:20.776 [2024-07-24 09:45:58.378691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.955 ms 00:20:20.776 [2024-07-24 09:45:58.378701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.776 [2024-07-24 09:45:58.378808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.776 [2024-07-24 09:45:58.378824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:20.776 [2024-07-24 09:45:58.378836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:20.776 [2024-07-24 09:45:58.378845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.776 [2024-07-24 09:45:58.380928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.776 [2024-07-24 09:45:58.380964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:20.776 [2024-07-24 09:45:58.380976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.070 ms 00:20:20.776 [2024-07-24 09:45:58.380986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.776 
[2024-07-24 09:45:58.382507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.776 [2024-07-24 09:45:58.382542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:20.776 [2024-07-24 09:45:58.382553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.493 ms 00:20:20.776 [2024-07-24 09:45:58.382563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.776 [2024-07-24 09:45:58.383944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.776 [2024-07-24 09:45:58.383999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:20.776 [2024-07-24 09:45:58.384010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.354 ms 00:20:20.776 [2024-07-24 09:45:58.384019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.776 [2024-07-24 09:45:58.385141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.776 [2024-07-24 09:45:58.385178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:20.776 [2024-07-24 09:45:58.385206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.068 ms 00:20:20.776 [2024-07-24 09:45:58.385216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.776 [2024-07-24 09:45:58.385245] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:20.776 [2024-07-24 09:45:58.385261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:20.776 [2024-07-24 09:45:58.385274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:20.776 [2024-07-24 09:45:58.385285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
15: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385670] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385926] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.385992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.386002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.386012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.386023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.386033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.386044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.386054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.386064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.386074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.386084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.386095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.386105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.386122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.386132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.386143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.386153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.386165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.386175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:20.777 [2024-07-24 09:45:58.386185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:20.778 [2024-07-24 
09:45:58.386205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:20.778 [2024-07-24 09:45:58.386216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:20.778 [2024-07-24 09:45:58.386226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:20.778 [2024-07-24 09:45:58.386237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:20.778 [2024-07-24 09:45:58.386247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:20.778 [2024-07-24 09:45:58.386257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:20.778 [2024-07-24 09:45:58.386268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:20.778 [2024-07-24 09:45:58.386279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:20.778 [2024-07-24 09:45:58.386290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:20.778 [2024-07-24 09:45:58.386301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:20.778 [2024-07-24 09:45:58.386312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:20.778 [2024-07-24 09:45:58.386330] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:20.778 [2024-07-24 09:45:58.386340] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6d23807e-12d1-42c4-9707-0f26913c394b 00:20:20.778 [2024-07-24 09:45:58.386351] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:20.778 [2024-07-24 09:45:58.386361] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:20.778 [2024-07-24 09:45:58.386371] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:20.778 [2024-07-24 09:45:58.386386] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:20.778 [2024-07-24 09:45:58.386396] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:20.778 [2024-07-24 09:45:58.386406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:20.778 [2024-07-24 09:45:58.386415] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:20.778 [2024-07-24 09:45:58.386424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:20.778 [2024-07-24 09:45:58.386433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:20.778 [2024-07-24 09:45:58.386443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.778 [2024-07-24 09:45:58.386453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:20.778 [2024-07-24 09:45:58.386463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.201 ms 00:20:20.778 [2024-07-24 09:45:58.386482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.778 [2024-07-24 09:45:58.388184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.778 [2024-07-24 09:45:58.388217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:20.778 [2024-07-24 09:45:58.388235] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.686 ms 00:20:20.778 [2024-07-24 09:45:58.388252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.778 [2024-07-24 09:45:58.388363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.778 [2024-07-24 09:45:58.388383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:20.778 [2024-07-24 09:45:58.388401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:20.778 [2024-07-24 09:45:58.388411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.778 [2024-07-24 09:45:58.394422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.778 [2024-07-24 09:45:58.394463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:20.778 [2024-07-24 09:45:58.394474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.778 [2024-07-24 09:45:58.394484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.778 [2024-07-24 09:45:58.394535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.778 [2024-07-24 09:45:58.394546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:20.778 [2024-07-24 09:45:58.394556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.778 [2024-07-24 09:45:58.394566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.778 [2024-07-24 09:45:58.394633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.778 [2024-07-24 09:45:58.394646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:20.778 [2024-07-24 09:45:58.394661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.778 [2024-07-24 09:45:58.394671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.778 [2024-07-24 09:45:58.394689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.778 [2024-07-24 09:45:58.394699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:20.778 [2024-07-24 09:45:58.394708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.778 [2024-07-24 09:45:58.394718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.778 [2024-07-24 09:45:58.408072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.778 [2024-07-24 09:45:58.408132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:20.778 [2024-07-24 09:45:58.408144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.778 [2024-07-24 09:45:58.408154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.778 [2024-07-24 09:45:58.416399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.778 [2024-07-24 09:45:58.416452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:20.778 [2024-07-24 09:45:58.416464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.778 [2024-07-24 09:45:58.416474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.778 [2024-07-24 09:45:58.416539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.778 [2024-07-24 09:45:58.416550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize core IO channel 00:20:20.778 [2024-07-24 09:45:58.416561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.778 [2024-07-24 09:45:58.416578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.778 [2024-07-24 09:45:58.416603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.778 [2024-07-24 09:45:58.416614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:20.778 [2024-07-24 09:45:58.416624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.778 [2024-07-24 09:45:58.416634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.778 [2024-07-24 09:45:58.416712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.778 [2024-07-24 09:45:58.416725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:20.778 [2024-07-24 09:45:58.416735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.778 [2024-07-24 09:45:58.416745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.778 [2024-07-24 09:45:58.416780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.778 [2024-07-24 09:45:58.416792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:20.778 [2024-07-24 09:45:58.416802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.778 [2024-07-24 09:45:58.416811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.778 [2024-07-24 09:45:58.416846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.778 [2024-07-24 09:45:58.416857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:20.778 [2024-07-24 09:45:58.416866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.778 [2024-07-24 09:45:58.416876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.778 [2024-07-24 09:45:58.416920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.778 [2024-07-24 09:45:58.416931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:20.778 [2024-07-24 09:45:58.416948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.778 [2024-07-24 09:45:58.416958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.778 [2024-07-24 09:45:58.417081] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.469 ms, result 0 00:20:21.087 00:20:21.087 00:20:21.087 09:45:58 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:23.005 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:23.005 09:46:00 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:23.005 [2024-07-24 09:46:00.468541] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:20:23.005 [2024-07-24 09:46:00.468654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91671 ] 00:20:23.005 [2024-07-24 09:46:00.634304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.005 [2024-07-24 09:46:00.679461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.005 [2024-07-24 09:46:00.780715] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:23.005 [2024-07-24 09:46:00.780790] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:23.265 [2024-07-24 09:46:00.939090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.265 [2024-07-24 09:46:00.939149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:23.265 [2024-07-24 09:46:00.939165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:23.265 [2024-07-24 09:46:00.939175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.265 [2024-07-24 09:46:00.939253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.265 [2024-07-24 09:46:00.939271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:23.265 [2024-07-24 09:46:00.939282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:23.265 [2024-07-24 09:46:00.939292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.265 [2024-07-24 09:46:00.939314] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:23.265 [2024-07-24 09:46:00.939565] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:23.265 [2024-07-24 09:46:00.939585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.265 [2024-07-24 09:46:00.939595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:23.265 [2024-07-24 09:46:00.939606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:20:23.265 [2024-07-24 09:46:00.939619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.265 [2024-07-24 09:46:00.941066] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:23.265 [2024-07-24 09:46:00.943599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.265 [2024-07-24 09:46:00.943636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:23.265 [2024-07-24 09:46:00.943649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.538 ms 00:20:23.265 [2024-07-24 09:46:00.943667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.265 [2024-07-24 09:46:00.943725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.265 [2024-07-24 09:46:00.943743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:23.265 [2024-07-24 09:46:00.943761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:23.265 [2024-07-24 09:46:00.943778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.265 [2024-07-24 09:46:00.950384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:23.265 [2024-07-24 09:46:00.950412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:23.265 [2024-07-24 09:46:00.950424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.540 ms 00:20:23.265 [2024-07-24 09:46:00.950434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.265 [2024-07-24 09:46:00.950527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.265 [2024-07-24 09:46:00.950540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:23.265 [2024-07-24 09:46:00.950554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:23.265 [2024-07-24 09:46:00.950564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.265 [2024-07-24 09:46:00.950618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.265 [2024-07-24 09:46:00.950634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:23.265 [2024-07-24 09:46:00.950648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:23.265 [2024-07-24 09:46:00.950657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.265 [2024-07-24 09:46:00.950690] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:23.265 [2024-07-24 09:46:00.952299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.265 [2024-07-24 09:46:00.952325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:23.265 [2024-07-24 09:46:00.952336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.618 ms 00:20:23.265 [2024-07-24 09:46:00.952345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.265 [2024-07-24 09:46:00.952389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.265 [2024-07-24 09:46:00.952400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:23.265 [2024-07-24 09:46:00.952410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:23.265 [2024-07-24 09:46:00.952419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.265 [2024-07-24 09:46:00.952440] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:23.265 [2024-07-24 09:46:00.952462] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:23.265 [2024-07-24 09:46:00.952497] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:23.265 [2024-07-24 09:46:00.952526] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:23.265 [2024-07-24 09:46:00.952615] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:23.265 [2024-07-24 09:46:00.952627] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:23.265 [2024-07-24 09:46:00.952640] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:23.266 [2024-07-24 09:46:00.952653] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:23.266 [2024-07-24 09:46:00.952664] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:23.266 [2024-07-24 09:46:00.952675] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:23.266 [2024-07-24 09:46:00.952684] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:23.266 [2024-07-24 09:46:00.952701] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:23.266 [2024-07-24 09:46:00.952718] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:23.266 [2024-07-24 09:46:00.952728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.266 [2024-07-24 09:46:00.952741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:23.266 [2024-07-24 09:46:00.952752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:20:23.266 [2024-07-24 09:46:00.952761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.266 [2024-07-24 09:46:00.952828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.266 [2024-07-24 09:46:00.952838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:23.266 [2024-07-24 09:46:00.952852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:23.266 [2024-07-24 09:46:00.952875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.266 [2024-07-24 09:46:00.952964] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:23.266 [2024-07-24 09:46:00.952977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:23.266 [2024-07-24 09:46:00.952991] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:23.266 [2024-07-24 09:46:00.953001] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.266 [2024-07-24 09:46:00.953018] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:23.266 [2024-07-24 09:46:00.953028] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:23.266 [2024-07-24 09:46:00.953037] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:23.266 [2024-07-24 09:46:00.953056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:23.266 [2024-07-24 09:46:00.953066] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:23.266 [2024-07-24 09:46:00.953075] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:23.266 [2024-07-24 09:46:00.953094] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:23.266 [2024-07-24 09:46:00.953103] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:23.266 [2024-07-24 09:46:00.953112] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:23.266 [2024-07-24 09:46:00.953122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:23.266 [2024-07-24 09:46:00.953135] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:23.266 [2024-07-24 09:46:00.953145] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.266 [2024-07-24 09:46:00.953154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:23.266 [2024-07-24 09:46:00.953163] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:23.266 [2024-07-24 09:46:00.953172] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.266 [2024-07-24 09:46:00.953182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:23.266 [2024-07-24 09:46:00.953208] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:23.266 [2024-07-24 09:46:00.953218] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:23.266 [2024-07-24 09:46:00.953227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:23.266 [2024-07-24 09:46:00.953236] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:23.266 [2024-07-24 09:46:00.953245] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:23.266 [2024-07-24 09:46:00.953254] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:23.266 [2024-07-24 09:46:00.953263] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:23.266 [2024-07-24 09:46:00.953272] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:23.266 [2024-07-24 09:46:00.953281] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:23.266 [2024-07-24 09:46:00.953290] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:23.266 [2024-07-24 09:46:00.953305] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:23.266 [2024-07-24 09:46:00.953314] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:23.266 [2024-07-24 09:46:00.953323] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:23.266 [2024-07-24 09:46:00.953332] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:23.266 [2024-07-24 09:46:00.953341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:23.266 [2024-07-24 09:46:00.953350] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:23.266 [2024-07-24 09:46:00.953359] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:23.266 [2024-07-24 09:46:00.953367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:23.266 [2024-07-24 09:46:00.953376] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:23.266 [2024-07-24 09:46:00.953385] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.266 [2024-07-24 09:46:00.953394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:23.266 [2024-07-24 09:46:00.953403] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:23.266 [2024-07-24 09:46:00.953412] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.266 [2024-07-24 09:46:00.953421] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:23.266 [2024-07-24 09:46:00.953431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:23.266 [2024-07-24 09:46:00.953441] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:23.266 [2024-07-24 09:46:00.953454] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.266 [2024-07-24 09:46:00.953464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:23.266 [2024-07-24 09:46:00.953473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:23.266 [2024-07-24 09:46:00.953482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:23.266 
[2024-07-24 09:46:00.953491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:23.266 [2024-07-24 09:46:00.953500] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:23.266 [2024-07-24 09:46:00.953509] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:23.266 [2024-07-24 09:46:00.953519] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:23.266 [2024-07-24 09:46:00.953532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:23.266 [2024-07-24 09:46:00.953543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:23.266 [2024-07-24 09:46:00.953553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:23.266 [2024-07-24 09:46:00.953563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:23.266 [2024-07-24 09:46:00.953573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:23.266 [2024-07-24 09:46:00.953583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:23.266 [2024-07-24 09:46:00.953593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:23.266 [2024-07-24 09:46:00.953604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:23.266 [2024-07-24 09:46:00.953616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:23.266 [2024-07-24 09:46:00.953626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:23.266 [2024-07-24 09:46:00.953637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:23.266 [2024-07-24 09:46:00.953647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:23.266 [2024-07-24 09:46:00.953657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:23.266 [2024-07-24 09:46:00.953666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:23.266 [2024-07-24 09:46:00.953676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:23.266 [2024-07-24 09:46:00.953686] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:23.266 [2024-07-24 09:46:00.953697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:23.266 [2024-07-24 09:46:00.953715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:23.266 [2024-07-24 09:46:00.953725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:23.266 [2024-07-24 09:46:00.953735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:23.266 [2024-07-24 09:46:00.953745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:23.266 [2024-07-24 09:46:00.953756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.266 [2024-07-24 09:46:00.953770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:23.267 [2024-07-24 09:46:00.953781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 00:20:23.267 [2024-07-24 09:46:00.953793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 [2024-07-24 09:46:00.975710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.267 [2024-07-24 09:46:00.975909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:23.267 [2024-07-24 09:46:00.976079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.892 ms 00:20:23.267 [2024-07-24 09:46:00.976129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 [2024-07-24 09:46:00.976283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.267 [2024-07-24 09:46:00.976386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:23.267 [2024-07-24 09:46:00.976463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:23.267 [2024-07-24 09:46:00.976503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 [2024-07-24 09:46:00.987222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.267 [2024-07-24 09:46:00.987384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:23.267 [2024-07-24 09:46:00.987510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.628 ms 00:20:23.267 [2024-07-24 09:46:00.987553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 [2024-07-24 09:46:00.987618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.267 [2024-07-24 09:46:00.987661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:23.267 [2024-07-24 09:46:00.987738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:23.267 [2024-07-24 09:46:00.987775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 [2024-07-24 09:46:00.988290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.267 [2024-07-24 09:46:00.988399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:23.267 [2024-07-24 09:46:00.988500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:20:23.267 [2024-07-24 09:46:00.988548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 [2024-07-24 09:46:00.988702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.267 [2024-07-24 09:46:00.988748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:23.267 [2024-07-24 09:46:00.988829] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:20:23.267 [2024-07-24 09:46:00.988866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 [2024-07-24 09:46:00.994923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.267 [2024-07-24 09:46:00.995055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:23.267 [2024-07-24 09:46:00.995125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.018 ms 00:20:23.267 [2024-07-24 09:46:00.995159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 [2024-07-24 09:46:00.997749] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:23.267 [2024-07-24 09:46:00.997898] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:23.267 [2024-07-24 09:46:00.997989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.267 [2024-07-24 09:46:00.998021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:23.267 [2024-07-24 09:46:00.998096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.691 ms 00:20:23.267 [2024-07-24 09:46:00.998139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 [2024-07-24 09:46:01.010769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.267 [2024-07-24 09:46:01.010910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:23.267 [2024-07-24 09:46:01.010932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.580 ms 00:20:23.267 [2024-07-24 09:46:01.010943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 [2024-07-24 09:46:01.012906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.267 [2024-07-24 09:46:01.012940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:23.267 [2024-07-24 09:46:01.012952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.868 ms 00:20:23.267 [2024-07-24 09:46:01.012961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 [2024-07-24 09:46:01.014432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.267 [2024-07-24 09:46:01.014463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:23.267 [2024-07-24 09:46:01.014475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms 00:20:23.267 [2024-07-24 09:46:01.014484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 [2024-07-24 09:46:01.014765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.267 [2024-07-24 09:46:01.014781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:23.267 [2024-07-24 09:46:01.014802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:20:23.267 [2024-07-24 09:46:01.014812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 [2024-07-24 09:46:01.034723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.267 [2024-07-24 09:46:01.034790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:23.267 [2024-07-24 09:46:01.034807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
19.922 ms 00:20:23.267 [2024-07-24 09:46:01.034817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 [2024-07-24 09:46:01.041027] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:23.267 [2024-07-24 09:46:01.043795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.267 [2024-07-24 09:46:01.043823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:23.267 [2024-07-24 09:46:01.043836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.946 ms 00:20:23.267 [2024-07-24 09:46:01.043846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 [2024-07-24 09:46:01.043933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.267 [2024-07-24 09:46:01.043950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:23.267 [2024-07-24 09:46:01.043969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:23.267 [2024-07-24 09:46:01.043985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 [2024-07-24 09:46:01.044065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.267 [2024-07-24 09:46:01.044076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:23.267 [2024-07-24 09:46:01.044095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:23.267 [2024-07-24 09:46:01.044105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 [2024-07-24 09:46:01.044128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.267 [2024-07-24 09:46:01.044139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:23.267 [2024-07-24 09:46:01.044157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:23.267 [2024-07-24 09:46:01.044166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 [2024-07-24 09:46:01.044209] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:23.267 [2024-07-24 09:46:01.044222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.267 [2024-07-24 09:46:01.044239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:23.267 [2024-07-24 09:46:01.044249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:23.267 [2024-07-24 09:46:01.044258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 [2024-07-24 09:46:01.047817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.267 [2024-07-24 09:46:01.047861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:23.267 [2024-07-24 09:46:01.047873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.543 ms 00:20:23.267 [2024-07-24 09:46:01.047892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 [2024-07-24 09:46:01.047956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.267 [2024-07-24 09:46:01.047971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:23.267 [2024-07-24 09:46:01.047982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:23.267 [2024-07-24 09:46:01.047991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.267 
[2024-07-24 09:46:01.049002] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 109.699 ms, result 0 00:20:59.856  Copying: 28/1024 [MB] (28 MBps) Copying: 57/1024 [MB] (29 MBps) Copying: 84/1024 [MB] (27 MBps) Copying: 112/1024 [MB] (27 MBps) Copying: 139/1024 [MB] (27 MBps) Copying: 166/1024 [MB] (26 MBps) Copying: 193/1024 [MB] (27 MBps) Copying: 220/1024 [MB] (27 MBps) Copying: 247/1024 [MB] (26 MBps) Copying: 274/1024 [MB] (27 MBps) Copying: 301/1024 [MB] (27 MBps) Copying: 332/1024 [MB] (30 MBps) Copying: 366/1024 [MB] (33 MBps) Copying: 397/1024 [MB] (31 MBps) Copying: 432/1024 [MB] (34 MBps) Copying: 463/1024 [MB] (31 MBps) Copying: 492/1024 [MB] (29 MBps) Copying: 521/1024 [MB] (29 MBps) Copying: 551/1024 [MB] (30 MBps) Copying: 579/1024 [MB] (27 MBps) Copying: 607/1024 [MB] (28 MBps) Copying: 635/1024 [MB] (27 MBps) Copying: 664/1024 [MB] (28 MBps) Copying: 696/1024 [MB] (32 MBps) Copying: 726/1024 [MB] (29 MBps) Copying: 756/1024 [MB] (29 MBps) Copying: 785/1024 [MB] (28 MBps) Copying: 814/1024 [MB] (28 MBps) Copying: 841/1024 [MB] (27 MBps) Copying: 868/1024 [MB] (26 MBps) Copying: 896/1024 [MB] (27 MBps) Copying: 923/1024 [MB] (27 MBps) Copying: 950/1024 [MB] (27 MBps) Copying: 979/1024 [MB] (28 MBps) Copying: 1006/1024 [MB] (27 MBps) Copying: 1023/1024 [MB] (16 MBps) Copying: 1024/1024 [MB] (average 28 MBps)[2024-07-24 09:46:37.443410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.856 [2024-07-24 09:46:37.443473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:59.856 [2024-07-24 09:46:37.443491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:59.856 [2024-07-24 09:46:37.443501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.856 [2024-07-24 09:46:37.445027] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:59.856 [2024-07-24 09:46:37.448205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.856 [2024-07-24 09:46:37.448241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:59.856 [2024-07-24 09:46:37.448255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.134 ms 00:20:59.856 [2024-07-24 09:46:37.448266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.856 [2024-07-24 09:46:37.457635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.856 [2024-07-24 09:46:37.457691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:59.856 [2024-07-24 09:46:37.457706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.422 ms 00:20:59.856 [2024-07-24 09:46:37.457728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.856 [2024-07-24 09:46:37.481555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.856 [2024-07-24 09:46:37.481608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:59.856 [2024-07-24 09:46:37.481623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.839 ms 00:20:59.856 [2024-07-24 09:46:37.481644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.856 [2024-07-24 09:46:37.486741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.856 [2024-07-24 09:46:37.486775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish 
L2P trims 00:20:59.856 [2024-07-24 09:46:37.486787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.070 ms 00:20:59.856 [2024-07-24 09:46:37.486797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.856 [2024-07-24 09:46:37.488491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.856 [2024-07-24 09:46:37.488525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:59.856 [2024-07-24 09:46:37.488537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.644 ms 00:20:59.856 [2024-07-24 09:46:37.488547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.856 [2024-07-24 09:46:37.492146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.856 [2024-07-24 09:46:37.492204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:59.856 [2024-07-24 09:46:37.492217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.578 ms 00:20:59.856 [2024-07-24 09:46:37.492227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.856 [2024-07-24 09:46:37.608536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.856 [2024-07-24 09:46:37.608592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:59.856 [2024-07-24 09:46:37.608608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.462 ms 00:20:59.856 [2024-07-24 09:46:37.608618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.856 [2024-07-24 09:46:37.610919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.856 [2024-07-24 09:46:37.610958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:59.856 [2024-07-24 09:46:37.610970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.287 ms 00:20:59.856 [2024-07-24 09:46:37.610979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.856 [2024-07-24 09:46:37.612411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.856 [2024-07-24 09:46:37.612446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:59.856 [2024-07-24 09:46:37.612458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.405 ms 00:20:59.856 [2024-07-24 09:46:37.612468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.856 [2024-07-24 09:46:37.613619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.856 [2024-07-24 09:46:37.613665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:59.856 [2024-07-24 09:46:37.613676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.125 ms 00:20:59.856 [2024-07-24 09:46:37.613686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.856 [2024-07-24 09:46:37.614816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.856 [2024-07-24 09:46:37.614849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:59.856 [2024-07-24 09:46:37.614860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.082 ms 00:20:59.856 [2024-07-24 09:46:37.614869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.856 [2024-07-24 09:46:37.614895] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:59.856 [2024-07-24 
09:46:37.614912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 110080 / 261120 wr_cnt: 1 state: open 00:20:59.856 [2024-07-24 09:46:37.614930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.614942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.614952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.614964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.614974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.614985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.614995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 
00:20:59.856 [2024-07-24 09:46:37.615183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:59.856 [2024-07-24 09:46:37.615344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 
wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.615988] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:59.857 [2024-07-24 09:46:37.616006] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:59.857 [2024-07-24 09:46:37.616016] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6d23807e-12d1-42c4-9707-0f26913c394b 00:20:59.857 [2024-07-24 09:46:37.616026] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 110080 00:20:59.857 [2024-07-24 09:46:37.616036] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 111040 00:20:59.857 [2024-07-24 09:46:37.616045] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 110080 00:20:59.857 [2024-07-24 09:46:37.616056] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0087 00:20:59.857 [2024-07-24 09:46:37.616065] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:59.857 [2024-07-24 09:46:37.616075] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:59.857 [2024-07-24 09:46:37.616084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:59.857 [2024-07-24 09:46:37.616094] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:59.857 [2024-07-24 09:46:37.616102] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:59.857 [2024-07-24 09:46:37.616112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.857 [2024-07-24 09:46:37.616122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:59.857 [2024-07-24 09:46:37.616132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms 00:20:59.857 [2024-07-24 09:46:37.616149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.857 [2024-07-24 09:46:37.618162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.857 [2024-07-24 09:46:37.618296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:59.857 [2024-07-24 09:46:37.618366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.988 ms 00:20:59.857 [2024-07-24 09:46:37.618401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.857 [2024-07-24 09:46:37.618546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.857 [2024-07-24 09:46:37.618592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:59.857 [2024-07-24 09:46:37.618661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:59.857 [2024-07-24 09:46:37.618695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.857 [2024-07-24 09:46:37.624713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.857 [2024-07-24 09:46:37.624833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:59.857 [2024-07-24 09:46:37.624903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.857 [2024-07-24 09:46:37.624936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.857 [2024-07-24 09:46:37.625024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.857 [2024-07-24 09:46:37.625110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:59.857 [2024-07-24 09:46:37.625145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.857 [2024-07-24 09:46:37.625174] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.858 [2024-07-24 09:46:37.625319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.858 [2024-07-24 09:46:37.625433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:59.858 [2024-07-24 09:46:37.625501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.858 [2024-07-24 09:46:37.625535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.858 [2024-07-24 09:46:37.625576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.858 [2024-07-24 09:46:37.625671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:59.858 [2024-07-24 09:46:37.625739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.858 [2024-07-24 09:46:37.625769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.858 [2024-07-24 09:46:37.637637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.858 [2024-07-24 09:46:37.637811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:59.858 [2024-07-24 09:46:37.637883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.858 [2024-07-24 09:46:37.637918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.858 [2024-07-24 09:46:37.646169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.858 [2024-07-24 09:46:37.646336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:59.858 [2024-07-24 09:46:37.646419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.858 [2024-07-24 09:46:37.646454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.858 [2024-07-24 09:46:37.646525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.858 [2024-07-24 09:46:37.646558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:59.858 [2024-07-24 09:46:37.646588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.858 [2024-07-24 09:46:37.646617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.858 [2024-07-24 09:46:37.646661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.858 [2024-07-24 09:46:37.646705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:59.858 [2024-07-24 09:46:37.646800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.858 [2024-07-24 09:46:37.646842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.858 [2024-07-24 09:46:37.646946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.858 [2024-07-24 09:46:37.647031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:59.858 [2024-07-24 09:46:37.647067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.858 [2024-07-24 09:46:37.647096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.858 [2024-07-24 09:46:37.647263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.858 [2024-07-24 09:46:37.647345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:59.858 [2024-07-24 09:46:37.647425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:20:59.858 [2024-07-24 09:46:37.647459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.858 [2024-07-24 09:46:37.647527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.858 [2024-07-24 09:46:37.647662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:59.858 [2024-07-24 09:46:37.647711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.858 [2024-07-24 09:46:37.647739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.858 [2024-07-24 09:46:37.647812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.858 [2024-07-24 09:46:37.647845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:59.858 [2024-07-24 09:46:37.647874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.858 [2024-07-24 09:46:37.647973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.858 [2024-07-24 09:46:37.648119] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 207.269 ms, result 0 00:21:00.793 00:21:00.793 00:21:00.793 09:46:38 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:00.793 [2024-07-24 09:46:38.360803] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:21:00.793 [2024-07-24 09:46:38.360941] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92060 ] 00:21:00.793 [2024-07-24 09:46:38.516177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.793 [2024-07-24 09:46:38.557481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.054 [2024-07-24 09:46:38.658647] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:01.054 [2024-07-24 09:46:38.658717] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:01.054 [2024-07-24 09:46:38.817144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.054 [2024-07-24 09:46:38.817228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:01.054 [2024-07-24 09:46:38.817245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:01.054 [2024-07-24 09:46:38.817255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.054 [2024-07-24 09:46:38.817312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.054 [2024-07-24 09:46:38.817328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:01.054 [2024-07-24 09:46:38.817339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:01.054 [2024-07-24 09:46:38.817348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.054 [2024-07-24 09:46:38.817369] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:01.054 [2024-07-24 09:46:38.817701] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: 
[FTL][ftl0] Using bdev as NV Cache device 00:21:01.054 [2024-07-24 09:46:38.817723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.054 [2024-07-24 09:46:38.817740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:01.054 [2024-07-24 09:46:38.817762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:21:01.054 [2024-07-24 09:46:38.817773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.054 [2024-07-24 09:46:38.819232] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:01.054 [2024-07-24 09:46:38.821740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.054 [2024-07-24 09:46:38.821782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:01.054 [2024-07-24 09:46:38.821795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.507 ms 00:21:01.054 [2024-07-24 09:46:38.821805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.054 [2024-07-24 09:46:38.821872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.054 [2024-07-24 09:46:38.821891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:01.054 [2024-07-24 09:46:38.821909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:01.054 [2024-07-24 09:46:38.821918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.054 [2024-07-24 09:46:38.828604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.054 [2024-07-24 09:46:38.828632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:01.054 [2024-07-24 09:46:38.828644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.630 ms 00:21:01.054 [2024-07-24 09:46:38.828654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.054 [2024-07-24 09:46:38.828747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.054 [2024-07-24 09:46:38.828764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:01.054 [2024-07-24 09:46:38.828782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:21:01.054 [2024-07-24 09:46:38.828792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.054 [2024-07-24 09:46:38.828857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.054 [2024-07-24 09:46:38.828872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:01.054 [2024-07-24 09:46:38.828889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:01.054 [2024-07-24 09:46:38.828899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.054 [2024-07-24 09:46:38.828930] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:01.054 [2024-07-24 09:46:38.830572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.054 [2024-07-24 09:46:38.830599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:01.054 [2024-07-24 09:46:38.830619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.649 ms 00:21:01.054 [2024-07-24 09:46:38.830632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.054 [2024-07-24 09:46:38.830671] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:21:01.054 [2024-07-24 09:46:38.830682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:01.054 [2024-07-24 09:46:38.830692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:01.054 [2024-07-24 09:46:38.830701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.054 [2024-07-24 09:46:38.830724] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:01.054 [2024-07-24 09:46:38.830747] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:01.054 [2024-07-24 09:46:38.830781] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:01.054 [2024-07-24 09:46:38.830804] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:01.054 [2024-07-24 09:46:38.830888] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:01.054 [2024-07-24 09:46:38.830901] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:01.054 [2024-07-24 09:46:38.830921] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:01.054 [2024-07-24 09:46:38.830934] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:01.054 [2024-07-24 09:46:38.830945] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:01.054 [2024-07-24 09:46:38.830956] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:01.054 [2024-07-24 09:46:38.830973] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:01.054 [2024-07-24 09:46:38.830983] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:01.054 [2024-07-24 09:46:38.830992] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:01.054 [2024-07-24 09:46:38.831005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.054 [2024-07-24 09:46:38.831015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:01.054 [2024-07-24 09:46:38.831032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:21:01.054 [2024-07-24 09:46:38.831049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.054 [2024-07-24 09:46:38.831115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.054 [2024-07-24 09:46:38.831133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:01.054 [2024-07-24 09:46:38.831143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:01.054 [2024-07-24 09:46:38.831152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.054 [2024-07-24 09:46:38.831263] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:01.054 [2024-07-24 09:46:38.831281] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:01.054 [2024-07-24 09:46:38.831291] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:01.054 [2024-07-24 09:46:38.831301] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.054 [2024-07-24 
09:46:38.831311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:01.054 [2024-07-24 09:46:38.831320] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:01.054 [2024-07-24 09:46:38.831329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:01.054 [2024-07-24 09:46:38.831339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:01.054 [2024-07-24 09:46:38.831348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:01.054 [2024-07-24 09:46:38.831357] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:01.054 [2024-07-24 09:46:38.831375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:01.054 [2024-07-24 09:46:38.831385] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:01.054 [2024-07-24 09:46:38.831394] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:01.054 [2024-07-24 09:46:38.831408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:01.054 [2024-07-24 09:46:38.831418] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:01.054 [2024-07-24 09:46:38.831427] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.054 [2024-07-24 09:46:38.831436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:01.054 [2024-07-24 09:46:38.831445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:01.054 [2024-07-24 09:46:38.831454] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.054 [2024-07-24 09:46:38.831463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:01.054 [2024-07-24 09:46:38.831473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:01.054 [2024-07-24 09:46:38.831482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.054 [2024-07-24 09:46:38.831491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:01.054 [2024-07-24 09:46:38.831500] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:01.054 [2024-07-24 09:46:38.831508] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.054 [2024-07-24 09:46:38.831517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:01.054 [2024-07-24 09:46:38.831526] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:01.054 [2024-07-24 09:46:38.831535] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.055 [2024-07-24 09:46:38.831544] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:01.055 [2024-07-24 09:46:38.831556] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:01.055 [2024-07-24 09:46:38.831565] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.055 [2024-07-24 09:46:38.831574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:01.055 [2024-07-24 09:46:38.831583] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:01.055 [2024-07-24 09:46:38.831591] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:01.055 [2024-07-24 09:46:38.831600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:01.055 [2024-07-24 09:46:38.831609] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 
MiB 00:21:01.055 [2024-07-24 09:46:38.831618] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:01.055 [2024-07-24 09:46:38.831626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:01.055 [2024-07-24 09:46:38.831635] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:01.055 [2024-07-24 09:46:38.831644] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.055 [2024-07-24 09:46:38.831652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:01.055 [2024-07-24 09:46:38.831662] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:01.055 [2024-07-24 09:46:38.831671] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.055 [2024-07-24 09:46:38.831679] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:01.055 [2024-07-24 09:46:38.831689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:01.055 [2024-07-24 09:46:38.831708] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:01.055 [2024-07-24 09:46:38.831718] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.055 [2024-07-24 09:46:38.831728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:01.055 [2024-07-24 09:46:38.831737] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:01.055 [2024-07-24 09:46:38.831746] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:01.055 [2024-07-24 09:46:38.831755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:01.055 [2024-07-24 09:46:38.831764] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:01.055 [2024-07-24 09:46:38.831773] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:01.055 [2024-07-24 09:46:38.831783] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:01.055 [2024-07-24 09:46:38.831795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:01.055 [2024-07-24 09:46:38.831806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:01.055 [2024-07-24 09:46:38.831816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:01.055 [2024-07-24 09:46:38.831827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:01.055 [2024-07-24 09:46:38.831837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:01.055 [2024-07-24 09:46:38.831848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:01.055 [2024-07-24 09:46:38.831858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:01.055 [2024-07-24 09:46:38.831871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:01.055 [2024-07-24 09:46:38.831881] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:01.055 [2024-07-24 09:46:38.831891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:01.055 [2024-07-24 09:46:38.831901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:01.055 [2024-07-24 09:46:38.831912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:01.055 [2024-07-24 09:46:38.831922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:01.055 [2024-07-24 09:46:38.831931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:01.055 [2024-07-24 09:46:38.831941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:01.055 [2024-07-24 09:46:38.831951] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:01.055 [2024-07-24 09:46:38.831962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:01.055 [2024-07-24 09:46:38.831976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:01.055 [2024-07-24 09:46:38.831986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:01.055 [2024-07-24 09:46:38.831996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:01.055 [2024-07-24 09:46:38.832007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:01.055 [2024-07-24 09:46:38.832017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.055 [2024-07-24 09:46:38.832027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:01.055 [2024-07-24 09:46:38.832040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:21:01.055 [2024-07-24 09:46:38.832050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.055 [2024-07-24 09:46:38.856630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.055 [2024-07-24 09:46:38.856864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:01.055 [2024-07-24 09:46:38.857052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.555 ms 00:21:01.055 [2024-07-24 09:46:38.857104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.055 [2024-07-24 09:46:38.857270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.055 [2024-07-24 09:46:38.857364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:01.055 [2024-07-24 09:46:38.857440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:21:01.055 [2024-07-24 09:46:38.857480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:01.055 [2024-07-24 09:46:38.868430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.055 [2024-07-24 09:46:38.868610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:01.055 [2024-07-24 09:46:38.868766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.853 ms 00:21:01.055 [2024-07-24 09:46:38.868804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.055 [2024-07-24 09:46:38.868878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.055 [2024-07-24 09:46:38.868911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:01.055 [2024-07-24 09:46:38.868941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:01.055 [2024-07-24 09:46:38.869034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.055 [2024-07-24 09:46:38.869563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.055 [2024-07-24 09:46:38.869614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:01.055 [2024-07-24 09:46:38.869702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:21:01.055 [2024-07-24 09:46:38.869736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.055 [2024-07-24 09:46:38.869881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.055 [2024-07-24 09:46:38.869978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:01.055 [2024-07-24 09:46:38.870081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:21:01.055 [2024-07-24 09:46:38.870117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.315 [2024-07-24 09:46:38.876098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.315 [2024-07-24 09:46:38.876267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:01.315 [2024-07-24 09:46:38.876348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.944 ms 00:21:01.315 [2024-07-24 09:46:38.876364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.315 [2024-07-24 09:46:38.878976] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:01.315 [2024-07-24 09:46:38.879011] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:01.315 [2024-07-24 09:46:38.879027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.315 [2024-07-24 09:46:38.879038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:01.315 [2024-07-24 09:46:38.879054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.566 ms 00:21:01.315 [2024-07-24 09:46:38.879064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.315 [2024-07-24 09:46:38.891880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.315 [2024-07-24 09:46:38.891924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:01.315 [2024-07-24 09:46:38.891938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.790 ms 00:21:01.315 [2024-07-24 09:46:38.891955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.315 [2024-07-24 09:46:38.893973] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:21:01.315 [2024-07-24 09:46:38.894008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:01.315 [2024-07-24 09:46:38.894020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.965 ms 00:21:01.315 [2024-07-24 09:46:38.894030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.315 [2024-07-24 09:46:38.895549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.315 [2024-07-24 09:46:38.895580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:01.315 [2024-07-24 09:46:38.895591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.486 ms 00:21:01.315 [2024-07-24 09:46:38.895600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.315 [2024-07-24 09:46:38.895897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.315 [2024-07-24 09:46:38.895928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:01.315 [2024-07-24 09:46:38.895940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:21:01.315 [2024-07-24 09:46:38.895950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.315 [2024-07-24 09:46:38.916331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.315 [2024-07-24 09:46:38.916396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:01.315 [2024-07-24 09:46:38.916413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.395 ms 00:21:01.315 [2024-07-24 09:46:38.916424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.315 [2024-07-24 09:46:38.922683] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:01.315 [2024-07-24 09:46:38.925857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.315 [2024-07-24 09:46:38.925888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:01.315 [2024-07-24 09:46:38.925901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.400 ms 00:21:01.315 [2024-07-24 09:46:38.925911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.315 [2024-07-24 09:46:38.926006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.315 [2024-07-24 09:46:38.926019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:01.315 [2024-07-24 09:46:38.926044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:01.315 [2024-07-24 09:46:38.926054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.315 [2024-07-24 09:46:38.927747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.315 [2024-07-24 09:46:38.927783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:01.315 [2024-07-24 09:46:38.927804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.641 ms 00:21:01.315 [2024-07-24 09:46:38.927814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.315 [2024-07-24 09:46:38.927856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.315 [2024-07-24 09:46:38.927874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:01.315 [2024-07-24 09:46:38.927884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.005 ms 00:21:01.315 [2024-07-24 09:46:38.927894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.315 [2024-07-24 09:46:38.927932] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:01.315 [2024-07-24 09:46:38.927944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.315 [2024-07-24 09:46:38.927962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:01.315 [2024-07-24 09:46:38.927973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:01.315 [2024-07-24 09:46:38.927982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.315 [2024-07-24 09:46:38.931737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.315 [2024-07-24 09:46:38.931772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:01.315 [2024-07-24 09:46:38.931795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.735 ms 00:21:01.315 [2024-07-24 09:46:38.931804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.315 [2024-07-24 09:46:38.931880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.315 [2024-07-24 09:46:38.931892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:01.315 [2024-07-24 09:46:38.931903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:01.315 [2024-07-24 09:46:38.931912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.315 [2024-07-24 09:46:38.937114] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 119.047 ms, result 0 00:21:36.392  Copying: 26/1024 [MB] (26 MBps) Copying: 55/1024 [MB] (29 MBps) Copying: 84/1024 [MB] (28 MBps) Copying: 115/1024 [MB] (31 MBps) Copying: 145/1024 [MB] (29 MBps) Copying: 174/1024 [MB] (28 MBps) Copying: 203/1024 [MB] (29 MBps) Copying: 233/1024 [MB] (29 MBps) Copying: 263/1024 [MB] (30 MBps) Copying: 293/1024 [MB] (29 MBps) Copying: 321/1024 [MB] (28 MBps) Copying: 351/1024 [MB] (29 MBps) Copying: 380/1024 [MB] (28 MBps) Copying: 408/1024 [MB] (27 MBps) Copying: 435/1024 [MB] (27 MBps) Copying: 464/1024 [MB] (28 MBps) Copying: 493/1024 [MB] (29 MBps) Copying: 524/1024 [MB] (30 MBps) Copying: 554/1024 [MB] (30 MBps) Copying: 584/1024 [MB] (29 MBps) Copying: 613/1024 [MB] (28 MBps) Copying: 641/1024 [MB] (28 MBps) Copying: 670/1024 [MB] (28 MBps) Copying: 701/1024 [MB] (31 MBps) Copying: 733/1024 [MB] (31 MBps) Copying: 764/1024 [MB] (30 MBps) Copying: 799/1024 [MB] (34 MBps) Copying: 831/1024 [MB] (32 MBps) Copying: 859/1024 [MB] (27 MBps) Copying: 887/1024 [MB] (28 MBps) Copying: 917/1024 [MB] (29 MBps) Copying: 946/1024 [MB] (28 MBps) Copying: 974/1024 [MB] (28 MBps) Copying: 1002/1024 [MB] (28 MBps) Copying: 1024/1024 [MB] (average 29 MBps)[2024-07-24 09:47:13.963064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.392 [2024-07-24 09:47:13.963139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:36.392 [2024-07-24 09:47:13.963157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:36.392 [2024-07-24 09:47:13.963168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.392 [2024-07-24 09:47:13.963220] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on 
app_thread 00:21:36.392 [2024-07-24 09:47:13.963913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.392 [2024-07-24 09:47:13.963934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:36.392 [2024-07-24 09:47:13.963946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:21:36.392 [2024-07-24 09:47:13.963957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.392 [2024-07-24 09:47:13.964365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.392 [2024-07-24 09:47:13.964381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:36.392 [2024-07-24 09:47:13.964394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:21:36.392 [2024-07-24 09:47:13.964405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.392 [2024-07-24 09:47:13.969442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.392 [2024-07-24 09:47:13.969494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:36.392 [2024-07-24 09:47:13.969508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.027 ms 00:21:36.392 [2024-07-24 09:47:13.969520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.392 [2024-07-24 09:47:13.975525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.392 [2024-07-24 09:47:13.975566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:36.392 [2024-07-24 09:47:13.975579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.974 ms 00:21:36.392 [2024-07-24 09:47:13.975600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.392 [2024-07-24 09:47:13.977294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.392 [2024-07-24 09:47:13.977329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:36.392 [2024-07-24 09:47:13.977340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.625 ms 00:21:36.392 [2024-07-24 09:47:13.977350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.392 [2024-07-24 09:47:13.981021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.392 [2024-07-24 09:47:13.981069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:36.392 [2024-07-24 09:47:13.981081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.647 ms 00:21:36.392 [2024-07-24 09:47:13.981091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.392 [2024-07-24 09:47:14.113556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.392 [2024-07-24 09:47:14.113612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:36.392 [2024-07-24 09:47:14.113629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 132.642 ms 00:21:36.392 [2024-07-24 09:47:14.113640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.392 [2024-07-24 09:47:14.115977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.392 [2024-07-24 09:47:14.116015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:36.392 [2024-07-24 09:47:14.116028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.323 ms 00:21:36.392 [2024-07-24 
09:47:14.116038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.392 [2024-07-24 09:47:14.117650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.392 [2024-07-24 09:47:14.117688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:36.392 [2024-07-24 09:47:14.117700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.583 ms 00:21:36.392 [2024-07-24 09:47:14.117710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.392 [2024-07-24 09:47:14.118907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.392 [2024-07-24 09:47:14.118940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:36.392 [2024-07-24 09:47:14.118967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.169 ms 00:21:36.392 [2024-07-24 09:47:14.118976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.392 [2024-07-24 09:47:14.120012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.392 [2024-07-24 09:47:14.120053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:36.392 [2024-07-24 09:47:14.120070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:21:36.392 [2024-07-24 09:47:14.120085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.392 [2024-07-24 09:47:14.120136] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:36.392 [2024-07-24 09:47:14.120167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:21:36.392 [2024-07-24 09:47:14.120187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:21:36.392 [2024-07-24 09:47:14.120439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:36.392 [2024-07-24 09:47:14.120764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.120782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.120799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.120817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.120834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.120850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.120868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.120885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.120902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.120921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.120938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.120955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.120972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.120999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121731] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:36.393 [2024-07-24 09:47:14.121941] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:36.393 [2024-07-24 09:47:14.121957] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6d23807e-12d1-42c4-9707-0f26913c394b 00:21:36.393 [2024-07-24 09:47:14.121974] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:21:36.393 [2024-07-24 09:47:14.121989] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 24768 00:21:36.393 [2024-07-24 09:47:14.122016] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 23808 00:21:36.393 [2024-07-24 09:47:14.122031] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0403 00:21:36.393 [2024-07-24 09:47:14.122046] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:36.393 [2024-07-24 09:47:14.122062] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:36.393 [2024-07-24 09:47:14.122078] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:36.393 [2024-07-24 09:47:14.122092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:36.393 [2024-07-24 09:47:14.122106] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:36.393 [2024-07-24 09:47:14.122121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.393 [2024-07-24 09:47:14.122137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:36.393 [2024-07-24 09:47:14.122157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.990 ms 00:21:36.393 [2024-07-24 09:47:14.122172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.393 [2024-07-24 09:47:14.124177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.393 [2024-07-24 09:47:14.124318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize L2P 00:21:36.393 [2024-07-24 09:47:14.124412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.980 ms 00:21:36.393 [2024-07-24 09:47:14.124457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.393 [2024-07-24 09:47:14.124592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.393 [2024-07-24 09:47:14.124625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:36.393 [2024-07-24 09:47:14.124729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:21:36.393 [2024-07-24 09:47:14.124764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.393 [2024-07-24 09:47:14.130767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.393 [2024-07-24 09:47:14.130913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:36.393 [2024-07-24 09:47:14.130986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.393 [2024-07-24 09:47:14.131021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.393 [2024-07-24 09:47:14.131097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.393 [2024-07-24 09:47:14.131128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:36.393 [2024-07-24 09:47:14.131217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.393 [2024-07-24 09:47:14.131255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.393 [2024-07-24 09:47:14.131350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.393 [2024-07-24 09:47:14.131386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:36.393 [2024-07-24 09:47:14.131416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.393 [2024-07-24 09:47:14.131487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.393 [2024-07-24 09:47:14.131531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.393 [2024-07-24 09:47:14.131577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:36.393 [2024-07-24 09:47:14.131607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.393 [2024-07-24 09:47:14.131637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.393 [2024-07-24 09:47:14.143857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.393 [2024-07-24 09:47:14.144012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:36.393 [2024-07-24 09:47:14.144102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.393 [2024-07-24 09:47:14.144140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.393 [2024-07-24 09:47:14.152727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.393 [2024-07-24 09:47:14.152882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:36.393 [2024-07-24 09:47:14.152953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.393 [2024-07-24 09:47:14.153009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.393 [2024-07-24 09:47:14.153086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.393 
[2024-07-24 09:47:14.153119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:36.393 [2024-07-24 09:47:14.153150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.393 [2024-07-24 09:47:14.153179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.393 [2024-07-24 09:47:14.153249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.394 [2024-07-24 09:47:14.153280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:36.394 [2024-07-24 09:47:14.153315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.394 [2024-07-24 09:47:14.153422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.394 [2024-07-24 09:47:14.153531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.394 [2024-07-24 09:47:14.153669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:36.394 [2024-07-24 09:47:14.153744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.394 [2024-07-24 09:47:14.153830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.394 [2024-07-24 09:47:14.153888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.394 [2024-07-24 09:47:14.153922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:36.394 [2024-07-24 09:47:14.153961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.394 [2024-07-24 09:47:14.153994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.394 [2024-07-24 09:47:14.154050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.394 [2024-07-24 09:47:14.154166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:36.394 [2024-07-24 09:47:14.154241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.394 [2024-07-24 09:47:14.154317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.394 [2024-07-24 09:47:14.154381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.394 [2024-07-24 09:47:14.154413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:36.394 [2024-07-24 09:47:14.154448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.394 [2024-07-24 09:47:14.154477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.394 [2024-07-24 09:47:14.154627] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 191.851 ms, result 0 00:21:36.718 00:21:36.718 00:21:36.718 09:47:14 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:38.621 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:38.621 09:47:16 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:38.621 09:47:16 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:21:38.621 09:47:16 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:38.621 09:47:16 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:38.621 09:47:16 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:38.621 Process with pid 90703 
is not found 00:21:38.621 09:47:16 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 90703 00:21:38.621 09:47:16 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 90703 ']' 00:21:38.621 09:47:16 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 90703 00:21:38.621 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (90703) - No such process 00:21:38.621 09:47:16 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 90703 is not found' 00:21:38.621 Remove shared memory files 00:21:38.621 09:47:16 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:21:38.621 09:47:16 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:38.621 09:47:16 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:21:38.621 09:47:16 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:21:38.621 09:47:16 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:21:38.621 09:47:16 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:38.621 09:47:16 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:21:38.621 ************************************ 00:21:38.621 END TEST ftl_restore 00:21:38.621 ************************************ 00:21:38.621 00:21:38.621 real 2m47.137s 00:21:38.621 user 2m35.707s 00:21:38.621 sys 0m12.828s 00:21:38.621 09:47:16 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:38.621 09:47:16 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:38.621 09:47:16 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:38.621 09:47:16 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:38.621 09:47:16 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:38.621 09:47:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:38.621 ************************************ 00:21:38.621 START TEST ftl_dirty_shutdown 00:21:38.621 ************************************ 00:21:38.621 09:47:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:38.879 * Looking for test storage... 00:21:38.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:38.879 09:47:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:38.879 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:38.879 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:38.879 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:38.879 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:21:38.879 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:38.879 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:38.879 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:38.879 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:38.879 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:38.879 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:38.879 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:38.879 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:38.879 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:38.879 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:38.879 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:38.879 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:38.879 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:38.879 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:38.879 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # 
device=0000:00:11.0 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:38.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=92507 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 92507 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92507 ']' 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:38.880 09:47:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:38.880 [2024-07-24 09:47:16.677950] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:21:38.880 [2024-07-24 09:47:16.678326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92507 ] 00:21:39.138 [2024-07-24 09:47:16.835133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.138 [2024-07-24 09:47:16.879980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.703 09:47:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:39.703 09:47:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:21:39.703 09:47:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:39.703 09:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:21:39.703 09:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:39.703 09:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:21:39.703 09:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:21:39.703 09:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:40.270 09:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:40.270 09:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:21:40.270 09:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:40.270 09:47:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:40.270 09:47:17 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:21:40.270 09:47:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:40.270 09:47:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:40.270 09:47:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:40.270 09:47:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:40.270 { 00:21:40.270 "name": "nvme0n1", 00:21:40.270 "aliases": [ 00:21:40.270 "9c7ac23a-e7f9-4722-bae4-9e8af3f64bb9" 00:21:40.270 ], 00:21:40.270 "product_name": "NVMe disk", 00:21:40.270 "block_size": 4096, 00:21:40.270 "num_blocks": 1310720, 00:21:40.270 "uuid": "9c7ac23a-e7f9-4722-bae4-9e8af3f64bb9", 00:21:40.270 "assigned_rate_limits": { 00:21:40.270 "rw_ios_per_sec": 0, 00:21:40.270 "rw_mbytes_per_sec": 0, 00:21:40.270 "r_mbytes_per_sec": 0, 00:21:40.270 "w_mbytes_per_sec": 0 00:21:40.270 }, 00:21:40.270 "claimed": true, 00:21:40.270 "claim_type": "read_many_write_one", 00:21:40.270 "zoned": false, 00:21:40.270 "supported_io_types": { 00:21:40.270 "read": true, 00:21:40.270 "write": true, 00:21:40.270 "unmap": true, 00:21:40.270 "flush": true, 00:21:40.270 "reset": true, 00:21:40.270 "nvme_admin": true, 00:21:40.270 "nvme_io": true, 00:21:40.270 "nvme_io_md": false, 00:21:40.270 "write_zeroes": true, 00:21:40.270 "zcopy": false, 00:21:40.270 "get_zone_info": false, 00:21:40.270 "zone_management": false, 00:21:40.270 "zone_append": false, 00:21:40.270 "compare": true, 00:21:40.270 "compare_and_write": false, 00:21:40.270 "abort": true, 00:21:40.270 "seek_hole": false, 00:21:40.270 "seek_data": false, 00:21:40.270 "copy": true, 00:21:40.270 "nvme_iov_md": false 00:21:40.270 }, 00:21:40.270 "driver_specific": { 00:21:40.270 "nvme": [ 00:21:40.270 { 00:21:40.270 "pci_address": "0000:00:11.0", 00:21:40.270 "trid": { 00:21:40.270 "trtype": "PCIe", 00:21:40.270 "traddr": "0000:00:11.0" 00:21:40.270 }, 00:21:40.270 "ctrlr_data": { 00:21:40.270 "cntlid": 0, 00:21:40.270 "vendor_id": "0x1b36", 00:21:40.270 "model_number": "QEMU NVMe Ctrl", 00:21:40.270 "serial_number": "12341", 00:21:40.270 "firmware_revision": "8.0.0", 00:21:40.270 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:40.270 "oacs": { 00:21:40.270 "security": 0, 00:21:40.270 "format": 1, 00:21:40.270 "firmware": 0, 00:21:40.270 "ns_manage": 1 00:21:40.270 }, 00:21:40.270 "multi_ctrlr": false, 00:21:40.270 "ana_reporting": false 00:21:40.270 }, 00:21:40.270 "vs": { 00:21:40.270 "nvme_version": "1.4" 00:21:40.270 }, 00:21:40.270 "ns_data": { 00:21:40.270 "id": 1, 00:21:40.270 "can_share": false 00:21:40.270 } 00:21:40.270 } 00:21:40.270 ], 00:21:40.270 "mp_policy": "active_passive" 00:21:40.270 } 00:21:40.270 } 00:21:40.270 ]' 00:21:40.270 09:47:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:40.270 09:47:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:40.270 09:47:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:40.529 09:47:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:40.529 09:47:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:40.529 09:47:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:21:40.529 09:47:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:21:40.529 09:47:18 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:40.529 09:47:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:21:40.529 09:47:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:40.529 09:47:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:40.865 09:47:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=08b6f515-839d-46ef-bb5d-d111435ee391 00:21:40.865 09:47:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:21:40.865 09:47:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 08b6f515-839d-46ef-bb5d-d111435ee391 00:21:40.865 09:47:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:41.123 09:47:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=7893ff48-bc3f-4946-b200-19833def9d80 00:21:41.123 09:47:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7893ff48-bc3f-4946-b200-19833def9d80 00:21:41.382 09:47:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=ceb70124-89bf-4f13-8983-0374c0fa7001 00:21:41.382 09:47:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:21:41.382 09:47:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ceb70124-89bf-4f13-8983-0374c0fa7001 00:21:41.382 09:47:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:21:41.382 09:47:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:41.382 09:47:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=ceb70124-89bf-4f13-8983-0374c0fa7001 00:21:41.382 09:47:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:21:41.382 09:47:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size ceb70124-89bf-4f13-8983-0374c0fa7001 00:21:41.382 09:47:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=ceb70124-89bf-4f13-8983-0374c0fa7001 00:21:41.382 09:47:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:41.382 09:47:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:41.382 09:47:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:41.382 09:47:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ceb70124-89bf-4f13-8983-0374c0fa7001 00:21:41.382 09:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:41.382 { 00:21:41.382 "name": "ceb70124-89bf-4f13-8983-0374c0fa7001", 00:21:41.382 "aliases": [ 00:21:41.383 "lvs/nvme0n1p0" 00:21:41.383 ], 00:21:41.383 "product_name": "Logical Volume", 00:21:41.383 "block_size": 4096, 00:21:41.383 "num_blocks": 26476544, 00:21:41.383 "uuid": "ceb70124-89bf-4f13-8983-0374c0fa7001", 00:21:41.383 "assigned_rate_limits": { 00:21:41.383 "rw_ios_per_sec": 0, 00:21:41.383 "rw_mbytes_per_sec": 0, 00:21:41.383 "r_mbytes_per_sec": 0, 00:21:41.383 "w_mbytes_per_sec": 0 00:21:41.383 }, 00:21:41.383 "claimed": false, 00:21:41.383 "zoned": false, 00:21:41.383 "supported_io_types": { 00:21:41.383 "read": true, 00:21:41.383 "write": true, 00:21:41.383 "unmap": true, 00:21:41.383 "flush": false, 00:21:41.383 "reset": true, 
00:21:41.383 "nvme_admin": false, 00:21:41.383 "nvme_io": false, 00:21:41.383 "nvme_io_md": false, 00:21:41.383 "write_zeroes": true, 00:21:41.383 "zcopy": false, 00:21:41.383 "get_zone_info": false, 00:21:41.383 "zone_management": false, 00:21:41.383 "zone_append": false, 00:21:41.383 "compare": false, 00:21:41.383 "compare_and_write": false, 00:21:41.383 "abort": false, 00:21:41.383 "seek_hole": true, 00:21:41.383 "seek_data": true, 00:21:41.383 "copy": false, 00:21:41.383 "nvme_iov_md": false 00:21:41.383 }, 00:21:41.383 "driver_specific": { 00:21:41.383 "lvol": { 00:21:41.383 "lvol_store_uuid": "7893ff48-bc3f-4946-b200-19833def9d80", 00:21:41.383 "base_bdev": "nvme0n1", 00:21:41.383 "thin_provision": true, 00:21:41.383 "num_allocated_clusters": 0, 00:21:41.383 "snapshot": false, 00:21:41.383 "clone": false, 00:21:41.383 "esnap_clone": false 00:21:41.383 } 00:21:41.383 } 00:21:41.383 } 00:21:41.383 ]' 00:21:41.383 09:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:41.642 09:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:41.642 09:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:41.642 09:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:41.642 09:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:41.642 09:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:41.642 09:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:21:41.642 09:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:21:41.642 09:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:41.900 09:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:41.900 09:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:41.900 09:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size ceb70124-89bf-4f13-8983-0374c0fa7001 00:21:41.900 09:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=ceb70124-89bf-4f13-8983-0374c0fa7001 00:21:41.900 09:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:41.900 09:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:41.900 09:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:41.900 09:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ceb70124-89bf-4f13-8983-0374c0fa7001 00:21:42.158 09:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:42.158 { 00:21:42.158 "name": "ceb70124-89bf-4f13-8983-0374c0fa7001", 00:21:42.158 "aliases": [ 00:21:42.158 "lvs/nvme0n1p0" 00:21:42.158 ], 00:21:42.158 "product_name": "Logical Volume", 00:21:42.158 "block_size": 4096, 00:21:42.158 "num_blocks": 26476544, 00:21:42.158 "uuid": "ceb70124-89bf-4f13-8983-0374c0fa7001", 00:21:42.158 "assigned_rate_limits": { 00:21:42.158 "rw_ios_per_sec": 0, 00:21:42.158 "rw_mbytes_per_sec": 0, 00:21:42.158 "r_mbytes_per_sec": 0, 00:21:42.158 "w_mbytes_per_sec": 0 00:21:42.158 }, 00:21:42.158 "claimed": false, 00:21:42.158 "zoned": false, 00:21:42.158 "supported_io_types": { 00:21:42.158 "read": true, 00:21:42.158 "write": true, 00:21:42.158 "unmap": 
true, 00:21:42.158 "flush": false, 00:21:42.158 "reset": true, 00:21:42.158 "nvme_admin": false, 00:21:42.158 "nvme_io": false, 00:21:42.158 "nvme_io_md": false, 00:21:42.158 "write_zeroes": true, 00:21:42.158 "zcopy": false, 00:21:42.158 "get_zone_info": false, 00:21:42.158 "zone_management": false, 00:21:42.158 "zone_append": false, 00:21:42.158 "compare": false, 00:21:42.158 "compare_and_write": false, 00:21:42.158 "abort": false, 00:21:42.158 "seek_hole": true, 00:21:42.158 "seek_data": true, 00:21:42.158 "copy": false, 00:21:42.158 "nvme_iov_md": false 00:21:42.159 }, 00:21:42.159 "driver_specific": { 00:21:42.159 "lvol": { 00:21:42.159 "lvol_store_uuid": "7893ff48-bc3f-4946-b200-19833def9d80", 00:21:42.159 "base_bdev": "nvme0n1", 00:21:42.159 "thin_provision": true, 00:21:42.159 "num_allocated_clusters": 0, 00:21:42.159 "snapshot": false, 00:21:42.159 "clone": false, 00:21:42.159 "esnap_clone": false 00:21:42.159 } 00:21:42.159 } 00:21:42.159 } 00:21:42.159 ]' 00:21:42.159 09:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:42.159 09:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:42.159 09:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:42.159 09:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:42.159 09:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:42.159 09:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:42.159 09:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:21:42.159 09:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:42.416 09:47:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:21:42.416 09:47:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size ceb70124-89bf-4f13-8983-0374c0fa7001 00:21:42.416 09:47:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=ceb70124-89bf-4f13-8983-0374c0fa7001 00:21:42.416 09:47:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:42.416 09:47:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:42.416 09:47:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:42.416 09:47:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ceb70124-89bf-4f13-8983-0374c0fa7001 00:21:42.675 09:47:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:42.675 { 00:21:42.675 "name": "ceb70124-89bf-4f13-8983-0374c0fa7001", 00:21:42.675 "aliases": [ 00:21:42.675 "lvs/nvme0n1p0" 00:21:42.675 ], 00:21:42.675 "product_name": "Logical Volume", 00:21:42.675 "block_size": 4096, 00:21:42.675 "num_blocks": 26476544, 00:21:42.675 "uuid": "ceb70124-89bf-4f13-8983-0374c0fa7001", 00:21:42.675 "assigned_rate_limits": { 00:21:42.675 "rw_ios_per_sec": 0, 00:21:42.675 "rw_mbytes_per_sec": 0, 00:21:42.675 "r_mbytes_per_sec": 0, 00:21:42.675 "w_mbytes_per_sec": 0 00:21:42.675 }, 00:21:42.675 "claimed": false, 00:21:42.675 "zoned": false, 00:21:42.675 "supported_io_types": { 00:21:42.675 "read": true, 00:21:42.675 "write": true, 00:21:42.675 "unmap": true, 00:21:42.675 "flush": false, 00:21:42.675 "reset": true, 00:21:42.675 "nvme_admin": false, 00:21:42.675 
"nvme_io": false, 00:21:42.675 "nvme_io_md": false, 00:21:42.675 "write_zeroes": true, 00:21:42.675 "zcopy": false, 00:21:42.675 "get_zone_info": false, 00:21:42.675 "zone_management": false, 00:21:42.675 "zone_append": false, 00:21:42.675 "compare": false, 00:21:42.675 "compare_and_write": false, 00:21:42.675 "abort": false, 00:21:42.675 "seek_hole": true, 00:21:42.675 "seek_data": true, 00:21:42.675 "copy": false, 00:21:42.675 "nvme_iov_md": false 00:21:42.675 }, 00:21:42.675 "driver_specific": { 00:21:42.675 "lvol": { 00:21:42.675 "lvol_store_uuid": "7893ff48-bc3f-4946-b200-19833def9d80", 00:21:42.675 "base_bdev": "nvme0n1", 00:21:42.675 "thin_provision": true, 00:21:42.675 "num_allocated_clusters": 0, 00:21:42.675 "snapshot": false, 00:21:42.675 "clone": false, 00:21:42.675 "esnap_clone": false 00:21:42.675 } 00:21:42.675 } 00:21:42.675 } 00:21:42.675 ]' 00:21:42.675 09:47:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:42.675 09:47:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:42.675 09:47:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:42.675 09:47:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:42.675 09:47:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:42.675 09:47:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:42.675 09:47:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:21:42.675 09:47:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ceb70124-89bf-4f13-8983-0374c0fa7001 --l2p_dram_limit 10' 00:21:42.675 09:47:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:21:42.675 09:47:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:21:42.675 09:47:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:42.675 09:47:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ceb70124-89bf-4f13-8983-0374c0fa7001 --l2p_dram_limit 10 -c nvc0n1p0 00:21:42.935 [2024-07-24 09:47:20.641466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.935 [2024-07-24 09:47:20.641521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:42.935 [2024-07-24 09:47:20.641540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:42.935 [2024-07-24 09:47:20.641550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.935 [2024-07-24 09:47:20.641618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.935 [2024-07-24 09:47:20.641634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:42.935 [2024-07-24 09:47:20.641647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:21:42.935 [2024-07-24 09:47:20.641657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.935 [2024-07-24 09:47:20.641684] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:42.935 [2024-07-24 09:47:20.641940] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:42.935 [2024-07-24 09:47:20.641961] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:21:42.935 [2024-07-24 09:47:20.641971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:42.935 [2024-07-24 09:47:20.641995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:21:42.935 [2024-07-24 09:47:20.642005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.935 [2024-07-24 09:47:20.642081] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 531d19f2-822e-4a37-8b81-722e0f888ffa 00:21:42.935 [2024-07-24 09:47:20.643479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.935 [2024-07-24 09:47:20.643509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:42.935 [2024-07-24 09:47:20.643522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:42.935 [2024-07-24 09:47:20.643534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.935 [2024-07-24 09:47:20.651128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.935 [2024-07-24 09:47:20.651172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:42.935 [2024-07-24 09:47:20.651204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.558 ms 00:21:42.935 [2024-07-24 09:47:20.651221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.935 [2024-07-24 09:47:20.651307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.935 [2024-07-24 09:47:20.651331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:42.935 [2024-07-24 09:47:20.651341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:42.935 [2024-07-24 09:47:20.651361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.935 [2024-07-24 09:47:20.651451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.935 [2024-07-24 09:47:20.651466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:42.935 [2024-07-24 09:47:20.651477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:42.935 [2024-07-24 09:47:20.651490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.935 [2024-07-24 09:47:20.651515] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:42.935 [2024-07-24 09:47:20.653386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.935 [2024-07-24 09:47:20.653418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:42.935 [2024-07-24 09:47:20.653433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.878 ms 00:21:42.935 [2024-07-24 09:47:20.653443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.936 [2024-07-24 09:47:20.653483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.936 [2024-07-24 09:47:20.653494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:42.936 [2024-07-24 09:47:20.653506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:42.936 [2024-07-24 09:47:20.653523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.936 [2024-07-24 09:47:20.653548] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:42.936 [2024-07-24 
09:47:20.653686] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:42.936 [2024-07-24 09:47:20.653704] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:42.936 [2024-07-24 09:47:20.653718] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:42.936 [2024-07-24 09:47:20.653733] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:42.936 [2024-07-24 09:47:20.653745] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:42.936 [2024-07-24 09:47:20.653768] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:42.936 [2024-07-24 09:47:20.653778] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:42.936 [2024-07-24 09:47:20.653790] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:42.936 [2024-07-24 09:47:20.653800] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:42.936 [2024-07-24 09:47:20.653823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.936 [2024-07-24 09:47:20.653833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:42.936 [2024-07-24 09:47:20.653853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:21:42.936 [2024-07-24 09:47:20.653863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.936 [2024-07-24 09:47:20.653940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.936 [2024-07-24 09:47:20.653951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:42.936 [2024-07-24 09:47:20.653969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:42.936 [2024-07-24 09:47:20.653979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.936 [2024-07-24 09:47:20.654073] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:42.936 [2024-07-24 09:47:20.654086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:42.936 [2024-07-24 09:47:20.654100] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:42.936 [2024-07-24 09:47:20.654113] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.936 [2024-07-24 09:47:20.654133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:42.936 [2024-07-24 09:47:20.654143] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:42.936 [2024-07-24 09:47:20.654155] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:42.936 [2024-07-24 09:47:20.654163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:42.936 [2024-07-24 09:47:20.654175] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:42.936 [2024-07-24 09:47:20.654184] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:42.936 [2024-07-24 09:47:20.654212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:42.936 [2024-07-24 09:47:20.654222] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:42.936 [2024-07-24 09:47:20.654234] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
00:21:42.936 [2024-07-24 09:47:20.654244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:42.936 [2024-07-24 09:47:20.654258] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:42.936 [2024-07-24 09:47:20.654267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.936 [2024-07-24 09:47:20.654280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:42.936 [2024-07-24 09:47:20.654289] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:42.936 [2024-07-24 09:47:20.654301] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.936 [2024-07-24 09:47:20.654310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:42.936 [2024-07-24 09:47:20.654322] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:42.936 [2024-07-24 09:47:20.654330] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:42.936 [2024-07-24 09:47:20.654343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:42.936 [2024-07-24 09:47:20.654352] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:42.936 [2024-07-24 09:47:20.654363] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:42.936 [2024-07-24 09:47:20.654372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:42.936 [2024-07-24 09:47:20.654384] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:42.936 [2024-07-24 09:47:20.654393] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:42.936 [2024-07-24 09:47:20.654405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:42.936 [2024-07-24 09:47:20.654414] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:42.936 [2024-07-24 09:47:20.654428] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:42.936 [2024-07-24 09:47:20.654437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:42.936 [2024-07-24 09:47:20.654448] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:42.936 [2024-07-24 09:47:20.654457] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:42.936 [2024-07-24 09:47:20.654468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:42.936 [2024-07-24 09:47:20.654478] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:42.936 [2024-07-24 09:47:20.654489] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:42.936 [2024-07-24 09:47:20.654498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:42.936 [2024-07-24 09:47:20.654509] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:42.936 [2024-07-24 09:47:20.654518] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.936 [2024-07-24 09:47:20.654529] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:42.936 [2024-07-24 09:47:20.654538] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:42.936 [2024-07-24 09:47:20.654550] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.936 [2024-07-24 09:47:20.654558] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:42.936 [2024-07-24 09:47:20.654570] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:42.936 [2024-07-24 09:47:20.654580] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:42.936 [2024-07-24 09:47:20.654594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.936 [2024-07-24 09:47:20.654609] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:42.936 [2024-07-24 09:47:20.654621] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:42.936 [2024-07-24 09:47:20.654629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:42.936 [2024-07-24 09:47:20.654641] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:42.936 [2024-07-24 09:47:20.654650] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:42.936 [2024-07-24 09:47:20.654662] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:42.936 [2024-07-24 09:47:20.654675] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:42.936 [2024-07-24 09:47:20.654689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:42.936 [2024-07-24 09:47:20.654701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:42.936 [2024-07-24 09:47:20.654714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:42.936 [2024-07-24 09:47:20.654724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:42.936 [2024-07-24 09:47:20.654736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:42.936 [2024-07-24 09:47:20.654747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:42.936 [2024-07-24 09:47:20.654759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:42.936 [2024-07-24 09:47:20.654769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:42.936 [2024-07-24 09:47:20.654784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:42.937 [2024-07-24 09:47:20.654794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:42.937 [2024-07-24 09:47:20.654807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:42.937 [2024-07-24 09:47:20.654817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:42.937 [2024-07-24 09:47:20.654829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:42.937 [2024-07-24 09:47:20.654839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:42.937 [2024-07-24 
09:47:20.654852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:42.937 [2024-07-24 09:47:20.654862] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:42.937 [2024-07-24 09:47:20.654875] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:42.937 [2024-07-24 09:47:20.654886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:42.937 [2024-07-24 09:47:20.654900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:42.937 [2024-07-24 09:47:20.654910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:42.937 [2024-07-24 09:47:20.654923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:42.937 [2024-07-24 09:47:20.654934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.937 [2024-07-24 09:47:20.654946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:42.937 [2024-07-24 09:47:20.654956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.916 ms 00:21:42.937 [2024-07-24 09:47:20.654971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.937 [2024-07-24 09:47:20.655015] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
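The startup trace above corresponds to bdev_ftl_create bringing up ftl0 for this run: a thin-provisioned 103424 MiB logical volume on the base NVMe controller (0000:00:11.0) holds the data region, a 5171 MiB split of the second controller (0000:00:10.0) serves as the NV write-buffer cache, and the L2P table is capped at 10 MiB of DRAM. The RPC sequence, condensed from the calls traced earlier in this test (rpc.py and the UUID arguments are shortened for readability):

    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore uuid>
    rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    rpc.py bdev_split_create nvc0n1 -s 5171 1
    rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol uuid> --l2p_dram_limit 10 -c nvc0n1p0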
00:21:42.937 [2024-07-24 09:47:20.655030] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:46.221 [2024-07-24 09:47:24.018318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.221 [2024-07-24 09:47:24.018383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:46.221 [2024-07-24 09:47:24.018403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3368.763 ms 00:21:46.221 [2024-07-24 09:47:24.018416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.221 [2024-07-24 09:47:24.029485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.221 [2024-07-24 09:47:24.029536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:46.221 [2024-07-24 09:47:24.029564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.989 ms 00:21:46.221 [2024-07-24 09:47:24.029581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.221 [2024-07-24 09:47:24.029670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.221 [2024-07-24 09:47:24.029686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:46.221 [2024-07-24 09:47:24.029696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:46.221 [2024-07-24 09:47:24.029709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.481 [2024-07-24 09:47:24.040156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.481 [2024-07-24 09:47:24.040226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:46.481 [2024-07-24 09:47:24.040246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.403 ms 00:21:46.481 [2024-07-24 09:47:24.040268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.481 [2024-07-24 09:47:24.040313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.481 [2024-07-24 09:47:24.040327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:46.481 [2024-07-24 09:47:24.040338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:46.481 [2024-07-24 09:47:24.040350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.481 [2024-07-24 09:47:24.040811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.481 [2024-07-24 09:47:24.040834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:46.481 [2024-07-24 09:47:24.040845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:21:46.481 [2024-07-24 09:47:24.040860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.481 [2024-07-24 09:47:24.040958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.481 [2024-07-24 09:47:24.040974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:46.481 [2024-07-24 09:47:24.040995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:46.481 [2024-07-24 09:47:24.041017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.481 [2024-07-24 09:47:24.048126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.481 [2024-07-24 09:47:24.048171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:46.481 [2024-07-24 
09:47:24.048183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.099 ms 00:21:46.481 [2024-07-24 09:47:24.048233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.481 [2024-07-24 09:47:24.055941] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:46.481 [2024-07-24 09:47:24.059130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.481 [2024-07-24 09:47:24.059167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:46.481 [2024-07-24 09:47:24.059184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.842 ms 00:21:46.481 [2024-07-24 09:47:24.059205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.481 [2024-07-24 09:47:24.141443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.481 [2024-07-24 09:47:24.141507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:46.481 [2024-07-24 09:47:24.141524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.334 ms 00:21:46.481 [2024-07-24 09:47:24.141535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.481 [2024-07-24 09:47:24.141724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.481 [2024-07-24 09:47:24.141737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:46.481 [2024-07-24 09:47:24.141750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:21:46.481 [2024-07-24 09:47:24.141760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.481 [2024-07-24 09:47:24.145372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.481 [2024-07-24 09:47:24.145409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:46.481 [2024-07-24 09:47:24.145427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.577 ms 00:21:46.481 [2024-07-24 09:47:24.145438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.481 [2024-07-24 09:47:24.148212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.481 [2024-07-24 09:47:24.148243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:46.481 [2024-07-24 09:47:24.148259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.736 ms 00:21:46.481 [2024-07-24 09:47:24.148268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.481 [2024-07-24 09:47:24.148531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.481 [2024-07-24 09:47:24.148551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:46.481 [2024-07-24 09:47:24.148565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:21:46.481 [2024-07-24 09:47:24.148574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.481 [2024-07-24 09:47:24.188605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.481 [2024-07-24 09:47:24.188663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:46.481 [2024-07-24 09:47:24.188681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.063 ms 00:21:46.481 [2024-07-24 09:47:24.188691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.481 [2024-07-24 09:47:24.193296] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.481 [2024-07-24 09:47:24.193338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:46.481 [2024-07-24 09:47:24.193354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.567 ms 00:21:46.481 [2024-07-24 09:47:24.193373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.481 [2024-07-24 09:47:24.196591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.481 [2024-07-24 09:47:24.196624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:46.481 [2024-07-24 09:47:24.196639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.172 ms 00:21:46.481 [2024-07-24 09:47:24.196649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.481 [2024-07-24 09:47:24.200105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.481 [2024-07-24 09:47:24.200140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:46.481 [2024-07-24 09:47:24.200156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.420 ms 00:21:46.481 [2024-07-24 09:47:24.200167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.481 [2024-07-24 09:47:24.200226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.481 [2024-07-24 09:47:24.200238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:46.481 [2024-07-24 09:47:24.200252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:46.481 [2024-07-24 09:47:24.200262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.481 [2024-07-24 09:47:24.200326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.481 [2024-07-24 09:47:24.200337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:46.481 [2024-07-24 09:47:24.200353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:46.481 [2024-07-24 09:47:24.200363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.481 [2024-07-24 09:47:24.201485] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3565.315 ms, result 0 00:21:46.481 { 00:21:46.481 "name": "ftl0", 00:21:46.481 "uuid": "531d19f2-822e-4a37-8b81-722e0f888ffa" 00:21:46.481 } 00:21:46.481 09:47:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:21:46.481 09:47:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:46.740 09:47:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:21:46.740 09:47:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:21:46.740 09:47:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:21:46.999 /dev/nbd0 00:21:46.999 09:47:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:21:46.999 09:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:46.999 09:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:21:46.999 09:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:46.999 09:47:24 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:46.999 09:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:46.999 09:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:21:46.999 09:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:46.999 09:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:46.999 09:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:21:46.999 1+0 records in 00:21:46.999 1+0 records out 00:21:46.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338682 s, 12.1 MB/s 00:21:46.999 09:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:46.999 09:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:21:46.999 09:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:46.999 09:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:46.999 09:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:21:46.999 09:47:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:21:46.999 [2024-07-24 09:47:24.724953] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:21:46.999 [2024-07-24 09:47:24.725094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92648 ] 00:21:47.262 [2024-07-24 09:47:24.890084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.262 [2024-07-24 09:47:24.937583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.389  Copying: 210/1024 [MB] (210 MBps) Copying: 422/1024 [MB] (211 MBps) Copying: 635/1024 [MB] (212 MBps) Copying: 844/1024 [MB] (209 MBps) Copying: 1024/1024 [MB] (average 211 MBps) 00:21:52.389 00:21:52.389 09:47:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:54.291 09:47:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:21:54.291 [2024-07-24 09:47:31.882651] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
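With ftl0 exposed through the kernel nbd module as /dev/nbd0, the test stages 1 GiB of random data (262144 blocks of 4096 bytes) in a scratch file, computes its checksum, and then streams the file onto the FTL device with spdk_dd using direct I/O; that second, slower copy is the one traced below. Condensed from the commands above (binary and file paths shortened from the spdk_repo tree):

    modprobe nbd
    rpc.py nbd_start_disk ftl0 /dev/nbd0
    spdk_dd -m 0x2 --if=/dev/urandom --of=test/ftl/testfile --bs=4096 --count=262144
    md5sum test/ftl/testfile
    spdk_dd -m 0x2 --if=test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct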
00:21:54.291 [2024-07-24 09:47:31.882784] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92725 ] 00:21:54.291 [2024-07-24 09:47:32.049806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.291 [2024-07-24 09:47:32.095303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.905  Copying: 19/1024 [MB] (19 MBps) Copying: 38/1024 [MB] (18 MBps) Copying: 56/1024 [MB] (18 MBps) Copying: 74/1024 [MB] (17 MBps) Copying: 93/1024 [MB] (18 MBps) Copying: 111/1024 [MB] (18 MBps) Copying: 130/1024 [MB] (18 MBps) Copying: 148/1024 [MB] (18 MBps) Copying: 165/1024 [MB] (17 MBps) Copying: 184/1024 [MB] (18 MBps) Copying: 202/1024 [MB] (18 MBps) Copying: 221/1024 [MB] (18 MBps) Copying: 240/1024 [MB] (18 MBps) Copying: 258/1024 [MB] (18 MBps) Copying: 276/1024 [MB] (18 MBps) Copying: 296/1024 [MB] (19 MBps) Copying: 314/1024 [MB] (18 MBps) Copying: 334/1024 [MB] (19 MBps) Copying: 353/1024 [MB] (19 MBps) Copying: 372/1024 [MB] (18 MBps) Copying: 391/1024 [MB] (19 MBps) Copying: 410/1024 [MB] (19 MBps) Copying: 429/1024 [MB] (19 MBps) Copying: 449/1024 [MB] (19 MBps) Copying: 468/1024 [MB] (19 MBps) Copying: 488/1024 [MB] (19 MBps) Copying: 508/1024 [MB] (20 MBps) Copying: 529/1024 [MB] (20 MBps) Copying: 549/1024 [MB] (20 MBps) Copying: 569/1024 [MB] (19 MBps) Copying: 587/1024 [MB] (18 MBps) Copying: 606/1024 [MB] (18 MBps) Copying: 625/1024 [MB] (18 MBps) Copying: 643/1024 [MB] (18 MBps) Copying: 661/1024 [MB] (17 MBps) Copying: 679/1024 [MB] (18 MBps) Copying: 697/1024 [MB] (17 MBps) Copying: 715/1024 [MB] (17 MBps) Copying: 733/1024 [MB] (18 MBps) Copying: 751/1024 [MB] (18 MBps) Copying: 771/1024 [MB] (19 MBps) Copying: 789/1024 [MB] (18 MBps) Copying: 808/1024 [MB] (18 MBps) Copying: 828/1024 [MB] (20 MBps) Copying: 849/1024 [MB] (20 MBps) Copying: 868/1024 [MB] (19 MBps) Copying: 887/1024 [MB] (19 MBps) Copying: 906/1024 [MB] (18 MBps) Copying: 923/1024 [MB] (16 MBps) Copying: 942/1024 [MB] (19 MBps) Copying: 961/1024 [MB] (18 MBps) Copying: 980/1024 [MB] (19 MBps) Copying: 1000/1024 [MB] (19 MBps) Copying: 1019/1024 [MB] (19 MBps) Copying: 1024/1024 [MB] (average 18 MBps) 00:22:48.905 00:22:48.905 09:48:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:22:48.905 09:48:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:22:49.164 09:48:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:49.425 [2024-07-24 09:48:27.021945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.425 [2024-07-24 09:48:27.022020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:49.425 [2024-07-24 09:48:27.022038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:49.425 [2024-07-24 09:48:27.022052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.425 [2024-07-24 09:48:27.022082] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:49.425 [2024-07-24 09:48:27.022804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.425 [2024-07-24 09:48:27.022823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Unregister IO device 00:22:49.425 [2024-07-24 09:48:27.022840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:22:49.425 [2024-07-24 09:48:27.022859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.425 [2024-07-24 09:48:27.024800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.425 [2024-07-24 09:48:27.024845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:49.425 [2024-07-24 09:48:27.024862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.909 ms 00:22:49.425 [2024-07-24 09:48:27.024873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.425 [2024-07-24 09:48:27.043106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.425 [2024-07-24 09:48:27.043171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:49.425 [2024-07-24 09:48:27.043204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.221 ms 00:22:49.425 [2024-07-24 09:48:27.043216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.425 [2024-07-24 09:48:27.048893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.425 [2024-07-24 09:48:27.048953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:49.425 [2024-07-24 09:48:27.048972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.619 ms 00:22:49.425 [2024-07-24 09:48:27.048999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.425 [2024-07-24 09:48:27.050929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.425 [2024-07-24 09:48:27.050975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:49.425 [2024-07-24 09:48:27.050995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.805 ms 00:22:49.425 [2024-07-24 09:48:27.051005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.425 [2024-07-24 09:48:27.055869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.425 [2024-07-24 09:48:27.055919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:49.425 [2024-07-24 09:48:27.055937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.827 ms 00:22:49.425 [2024-07-24 09:48:27.055948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.425 [2024-07-24 09:48:27.056077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.425 [2024-07-24 09:48:27.056091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:49.425 [2024-07-24 09:48:27.056105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:22:49.425 [2024-07-24 09:48:27.056116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.425 [2024-07-24 09:48:27.058135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.425 [2024-07-24 09:48:27.058176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:49.425 [2024-07-24 09:48:27.058206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.993 ms 00:22:49.425 [2024-07-24 09:48:27.058218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.425 [2024-07-24 09:48:27.059658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.425 [2024-07-24 
09:48:27.059694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:49.425 [2024-07-24 09:48:27.059713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.400 ms 00:22:49.425 [2024-07-24 09:48:27.059723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.425 [2024-07-24 09:48:27.060831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.425 [2024-07-24 09:48:27.060865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:49.425 [2024-07-24 09:48:27.060880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:22:49.425 [2024-07-24 09:48:27.060890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.425 [2024-07-24 09:48:27.062083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.425 [2024-07-24 09:48:27.062120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:49.425 [2024-07-24 09:48:27.062136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.120 ms 00:22:49.426 [2024-07-24 09:48:27.062146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.426 [2024-07-24 09:48:27.062182] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:49.426 [2024-07-24 09:48:27.062229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062444] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062776] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.062997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.063011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.063022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.063047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.063059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.063073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.063084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.063098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.063109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 
09:48:27.063123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.063134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.063148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.063160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.063181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.063193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.063519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.063577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.063630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.063679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.063795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:49.426 [2024-07-24 09:48:27.063848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.063900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.063950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 
00:22:49.427 [2024-07-24 09:48:27.064789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:49.427 [2024-07-24 09:48:27.064955] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:49.427 [2024-07-24 09:48:27.064975] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 531d19f2-822e-4a37-8b81-722e0f888ffa 00:22:49.427 [2024-07-24 09:48:27.064988] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:49.427 [2024-07-24 09:48:27.065002] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:49.427 [2024-07-24 09:48:27.065012] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:49.427 [2024-07-24 09:48:27.065026] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:49.427 [2024-07-24 09:48:27.065036] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:49.427 [2024-07-24 09:48:27.065062] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:49.427 [2024-07-24 09:48:27.065073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:49.427 [2024-07-24 09:48:27.065086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:49.427 [2024-07-24 09:48:27.065096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:49.427 [2024-07-24 09:48:27.065110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.427 [2024-07-24 09:48:27.065125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:49.427 [2024-07-24 09:48:27.065139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.935 ms 00:22:49.427 [2024-07-24 09:48:27.065150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.427 [2024-07-24 09:48:27.067275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.427 [2024-07-24 09:48:27.067299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:49.427 [2024-07-24 09:48:27.067317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.095 ms 00:22:49.427 [2024-07-24 09:48:27.067328] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:49.427 [2024-07-24 09:48:27.067471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.427 [2024-07-24 09:48:27.067483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:49.427 [2024-07-24 09:48:27.067498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:22:49.427 [2024-07-24 09:48:27.067509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.427 [2024-07-24 09:48:27.075059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.427 [2024-07-24 09:48:27.075254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:49.427 [2024-07-24 09:48:27.075373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.427 [2024-07-24 09:48:27.075412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.427 [2024-07-24 09:48:27.075531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.427 [2024-07-24 09:48:27.075565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:49.427 [2024-07-24 09:48:27.075652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.427 [2024-07-24 09:48:27.075690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.427 [2024-07-24 09:48:27.075872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.427 [2024-07-24 09:48:27.075976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:49.427 [2024-07-24 09:48:27.076016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.427 [2024-07-24 09:48:27.076095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.427 [2024-07-24 09:48:27.076152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.427 [2024-07-24 09:48:27.076199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:49.427 [2024-07-24 09:48:27.076289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.427 [2024-07-24 09:48:27.076325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.427 [2024-07-24 09:48:27.091023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.427 [2024-07-24 09:48:27.091302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:49.427 [2024-07-24 09:48:27.091335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.427 [2024-07-24 09:48:27.091351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.427 [2024-07-24 09:48:27.100318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.427 [2024-07-24 09:48:27.100369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:49.427 [2024-07-24 09:48:27.100388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.427 [2024-07-24 09:48:27.100399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.427 [2024-07-24 09:48:27.100545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.427 [2024-07-24 09:48:27.100558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:49.427 [2024-07-24 09:48:27.100575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:22:49.427 [2024-07-24 09:48:27.100586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.427 [2024-07-24 09:48:27.100642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.427 [2024-07-24 09:48:27.100655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:49.427 [2024-07-24 09:48:27.100672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.427 [2024-07-24 09:48:27.100683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.427 [2024-07-24 09:48:27.100769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.427 [2024-07-24 09:48:27.100782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:49.427 [2024-07-24 09:48:27.100796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.427 [2024-07-24 09:48:27.100807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.427 [2024-07-24 09:48:27.100848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.427 [2024-07-24 09:48:27.100861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:49.427 [2024-07-24 09:48:27.100877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.427 [2024-07-24 09:48:27.100887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.427 [2024-07-24 09:48:27.100931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.427 [2024-07-24 09:48:27.100966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:49.427 [2024-07-24 09:48:27.100983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.428 [2024-07-24 09:48:27.100993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.428 [2024-07-24 09:48:27.101068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.428 [2024-07-24 09:48:27.101084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:49.428 [2024-07-24 09:48:27.101102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.428 [2024-07-24 09:48:27.101112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.428 [2024-07-24 09:48:27.101359] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 79.483 ms, result 0 00:22:49.428 true 00:22:49.428 09:48:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 92507 00:22:49.428 09:48:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid92507 00:22:49.428 09:48:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:22:49.428 [2024-07-24 09:48:27.227443] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:22:49.428 [2024-07-24 09:48:27.227597] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93291 ] 00:22:49.686 [2024-07-24 09:48:27.394036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.686 [2024-07-24 09:48:27.442695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.052  Copying: 204/1024 [MB] (204 MBps) Copying: 409/1024 [MB] (204 MBps) Copying: 615/1024 [MB] (206 MBps) Copying: 816/1024 [MB] (200 MBps) Copying: 1016/1024 [MB] (200 MBps) Copying: 1024/1024 [MB] (average 203 MBps) 00:22:55.052 00:22:55.052 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 92507 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:22:55.052 09:48:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:55.310 [2024-07-24 09:48:32.880059] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:22:55.310 [2024-07-24 09:48:32.880219] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93352 ] 00:22:55.310 [2024-07-24 09:48:33.053033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.310 [2024-07-24 09:48:33.104276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.568 [2024-07-24 09:48:33.210004] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:55.568 [2024-07-24 09:48:33.210091] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:55.568 [2024-07-24 09:48:33.275265] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:22:55.568 [2024-07-24 09:48:33.275527] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:22:55.568 [2024-07-24 09:48:33.275653] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:22:55.826 [2024-07-24 09:48:33.495129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.826 [2024-07-24 09:48:33.495229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:55.826 [2024-07-24 09:48:33.495248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:55.826 [2024-07-24 09:48:33.495259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.826 [2024-07-24 09:48:33.495339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.826 [2024-07-24 09:48:33.495359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:55.826 [2024-07-24 09:48:33.495372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:55.826 [2024-07-24 09:48:33.495382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.826 [2024-07-24 09:48:33.495415] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:55.826 [2024-07-24 09:48:33.495851] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: 
[FTL][ftl0] Using bdev as NV Cache device 00:22:55.826 [2024-07-24 09:48:33.495880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.826 [2024-07-24 09:48:33.495892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:55.826 [2024-07-24 09:48:33.495907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:22:55.826 [2024-07-24 09:48:33.495918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.826 [2024-07-24 09:48:33.497539] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:55.826 [2024-07-24 09:48:33.500209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.826 [2024-07-24 09:48:33.500260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:55.826 [2024-07-24 09:48:33.500276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.675 ms 00:22:55.826 [2024-07-24 09:48:33.500288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.827 [2024-07-24 09:48:33.500378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.827 [2024-07-24 09:48:33.500397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:55.827 [2024-07-24 09:48:33.500413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:55.827 [2024-07-24 09:48:33.500424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.827 [2024-07-24 09:48:33.507702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.827 [2024-07-24 09:48:33.507905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:55.827 [2024-07-24 09:48:33.508012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.211 ms 00:22:55.827 [2024-07-24 09:48:33.508048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.827 [2024-07-24 09:48:33.508271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.827 [2024-07-24 09:48:33.508387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:55.827 [2024-07-24 09:48:33.508468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:22:55.827 [2024-07-24 09:48:33.508507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.827 [2024-07-24 09:48:33.508659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.827 [2024-07-24 09:48:33.508757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:55.827 [2024-07-24 09:48:33.508836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:55.827 [2024-07-24 09:48:33.508915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.827 [2024-07-24 09:48:33.509009] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:55.827 [2024-07-24 09:48:33.510875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.827 [2024-07-24 09:48:33.511023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:55.827 [2024-07-24 09:48:33.511043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.878 ms 00:22:55.827 [2024-07-24 09:48:33.511054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.827 [2024-07-24 09:48:33.511106] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:22:55.827 [2024-07-24 09:48:33.511118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:55.827 [2024-07-24 09:48:33.511130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:55.827 [2024-07-24 09:48:33.511140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.827 [2024-07-24 09:48:33.511178] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:55.827 [2024-07-24 09:48:33.511221] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:55.827 [2024-07-24 09:48:33.511273] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:55.827 [2024-07-24 09:48:33.511293] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:55.827 [2024-07-24 09:48:33.511382] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:55.827 [2024-07-24 09:48:33.511397] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:55.827 [2024-07-24 09:48:33.511410] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:55.827 [2024-07-24 09:48:33.511425] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:55.827 [2024-07-24 09:48:33.511437] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:55.827 [2024-07-24 09:48:33.511460] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:55.827 [2024-07-24 09:48:33.511470] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:55.827 [2024-07-24 09:48:33.511481] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:55.827 [2024-07-24 09:48:33.511491] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:55.827 [2024-07-24 09:48:33.511502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.827 [2024-07-24 09:48:33.511513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:55.827 [2024-07-24 09:48:33.511524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:22:55.827 [2024-07-24 09:48:33.511534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.827 [2024-07-24 09:48:33.511605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.827 [2024-07-24 09:48:33.511620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:55.827 [2024-07-24 09:48:33.511631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:55.827 [2024-07-24 09:48:33.511644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.827 [2024-07-24 09:48:33.511733] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:55.827 [2024-07-24 09:48:33.511746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:55.827 [2024-07-24 09:48:33.511757] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:55.827 [2024-07-24 09:48:33.511768] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.827 [2024-07-24 
09:48:33.511779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:55.827 [2024-07-24 09:48:33.511789] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:55.827 [2024-07-24 09:48:33.511809] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:55.827 [2024-07-24 09:48:33.511819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:55.827 [2024-07-24 09:48:33.511830] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:55.827 [2024-07-24 09:48:33.511842] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:55.827 [2024-07-24 09:48:33.511852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:55.827 [2024-07-24 09:48:33.511862] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:55.827 [2024-07-24 09:48:33.511872] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:55.827 [2024-07-24 09:48:33.511888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:55.827 [2024-07-24 09:48:33.511899] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:55.827 [2024-07-24 09:48:33.511908] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.827 [2024-07-24 09:48:33.511918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:55.827 [2024-07-24 09:48:33.511928] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:55.827 [2024-07-24 09:48:33.511937] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.827 [2024-07-24 09:48:33.511947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:55.827 [2024-07-24 09:48:33.511957] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:55.827 [2024-07-24 09:48:33.511967] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:55.827 [2024-07-24 09:48:33.511976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:55.827 [2024-07-24 09:48:33.511986] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:55.827 [2024-07-24 09:48:33.511996] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:55.827 [2024-07-24 09:48:33.512005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:55.827 [2024-07-24 09:48:33.512015] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:55.827 [2024-07-24 09:48:33.512024] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:55.827 [2024-07-24 09:48:33.512034] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:55.827 [2024-07-24 09:48:33.512054] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:55.827 [2024-07-24 09:48:33.512065] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:55.827 [2024-07-24 09:48:33.512075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:55.827 [2024-07-24 09:48:33.512084] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:55.827 [2024-07-24 09:48:33.512094] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:55.827 [2024-07-24 09:48:33.512103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:55.827 [2024-07-24 09:48:33.512113] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 
MiB 00:22:55.827 [2024-07-24 09:48:33.512122] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:55.827 [2024-07-24 09:48:33.512131] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:55.827 [2024-07-24 09:48:33.512141] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:55.827 [2024-07-24 09:48:33.512150] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.827 [2024-07-24 09:48:33.512160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:55.827 [2024-07-24 09:48:33.512170] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:55.827 [2024-07-24 09:48:33.512180] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.827 [2024-07-24 09:48:33.512201] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:55.827 [2024-07-24 09:48:33.512220] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:55.828 [2024-07-24 09:48:33.512234] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:55.828 [2024-07-24 09:48:33.512252] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.828 [2024-07-24 09:48:33.512266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:55.828 [2024-07-24 09:48:33.512276] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:55.828 [2024-07-24 09:48:33.512286] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:55.828 [2024-07-24 09:48:33.512296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:55.828 [2024-07-24 09:48:33.512327] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:55.828 [2024-07-24 09:48:33.512337] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:55.828 [2024-07-24 09:48:33.512348] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:55.828 [2024-07-24 09:48:33.512360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:55.828 [2024-07-24 09:48:33.512375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:55.828 [2024-07-24 09:48:33.512387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:55.828 [2024-07-24 09:48:33.512397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:55.828 [2024-07-24 09:48:33.512408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:55.828 [2024-07-24 09:48:33.512419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:55.828 [2024-07-24 09:48:33.512430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:55.828 [2024-07-24 09:48:33.512443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:55.828 [2024-07-24 09:48:33.512454] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:55.828 [2024-07-24 09:48:33.512465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:55.828 [2024-07-24 09:48:33.512475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:55.828 [2024-07-24 09:48:33.512486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:55.828 [2024-07-24 09:48:33.512497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:55.828 [2024-07-24 09:48:33.512508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:55.828 [2024-07-24 09:48:33.512518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:55.828 [2024-07-24 09:48:33.512529] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:55.828 [2024-07-24 09:48:33.512540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:55.828 [2024-07-24 09:48:33.512552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:55.828 [2024-07-24 09:48:33.512563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:55.828 [2024-07-24 09:48:33.512573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:55.828 [2024-07-24 09:48:33.512584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:55.828 [2024-07-24 09:48:33.512595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.828 [2024-07-24 09:48:33.512605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:55.828 [2024-07-24 09:48:33.512626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.918 ms 00:22:55.828 [2024-07-24 09:48:33.512637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.828 [2024-07-24 09:48:33.535712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.828 [2024-07-24 09:48:33.535774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:55.828 [2024-07-24 09:48:33.535797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.012 ms 00:22:55.828 [2024-07-24 09:48:33.535822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.828 [2024-07-24 09:48:33.535935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.828 [2024-07-24 09:48:33.535947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:55.828 [2024-07-24 09:48:33.535959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:22:55.828 [2024-07-24 09:48:33.535970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:55.828 [2024-07-24 09:48:33.548024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.828 [2024-07-24 09:48:33.548089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:55.828 [2024-07-24 09:48:33.548105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.986 ms 00:22:55.828 [2024-07-24 09:48:33.548116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.828 [2024-07-24 09:48:33.548214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.828 [2024-07-24 09:48:33.548227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:55.828 [2024-07-24 09:48:33.548239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:55.828 [2024-07-24 09:48:33.548249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.828 [2024-07-24 09:48:33.548771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.828 [2024-07-24 09:48:33.548792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:55.828 [2024-07-24 09:48:33.548807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:22:55.828 [2024-07-24 09:48:33.548817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.828 [2024-07-24 09:48:33.548955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.828 [2024-07-24 09:48:33.548972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:55.828 [2024-07-24 09:48:33.548984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:22:55.828 [2024-07-24 09:48:33.548994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.828 [2024-07-24 09:48:33.555175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.828 [2024-07-24 09:48:33.555240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:55.828 [2024-07-24 09:48:33.555256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.168 ms 00:22:55.828 [2024-07-24 09:48:33.555271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.828 [2024-07-24 09:48:33.558373] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:55.828 [2024-07-24 09:48:33.558436] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:55.828 [2024-07-24 09:48:33.558454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.828 [2024-07-24 09:48:33.558465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:55.828 [2024-07-24 09:48:33.558499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.068 ms 00:22:55.828 [2024-07-24 09:48:33.558511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.828 [2024-07-24 09:48:33.572286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.828 [2024-07-24 09:48:33.572368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:55.828 [2024-07-24 09:48:33.572387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.737 ms 00:22:55.828 [2024-07-24 09:48:33.572421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.828 [2024-07-24 09:48:33.575119] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:22:55.828 [2024-07-24 09:48:33.575181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:55.828 [2024-07-24 09:48:33.575212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.624 ms 00:22:55.828 [2024-07-24 09:48:33.575224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.828 [2024-07-24 09:48:33.576813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.828 [2024-07-24 09:48:33.576854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:55.828 [2024-07-24 09:48:33.576868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.547 ms 00:22:55.828 [2024-07-24 09:48:33.576879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.828 [2024-07-24 09:48:33.577261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.828 [2024-07-24 09:48:33.577286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:55.828 [2024-07-24 09:48:33.577299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:22:55.828 [2024-07-24 09:48:33.577310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.828 [2024-07-24 09:48:33.598240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.828 [2024-07-24 09:48:33.598316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:55.828 [2024-07-24 09:48:33.598332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.940 ms 00:22:55.828 [2024-07-24 09:48:33.598380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.828 [2024-07-24 09:48:33.606588] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:55.829 [2024-07-24 09:48:33.610281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.829 [2024-07-24 09:48:33.610326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:55.829 [2024-07-24 09:48:33.610349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.845 ms 00:22:55.829 [2024-07-24 09:48:33.610359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.829 [2024-07-24 09:48:33.610476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.829 [2024-07-24 09:48:33.610493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:55.829 [2024-07-24 09:48:33.610505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:55.829 [2024-07-24 09:48:33.610515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.829 [2024-07-24 09:48:33.610588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.829 [2024-07-24 09:48:33.610601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:55.829 [2024-07-24 09:48:33.610611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:55.829 [2024-07-24 09:48:33.610621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.829 [2024-07-24 09:48:33.610650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.829 [2024-07-24 09:48:33.610671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:55.829 [2024-07-24 09:48:33.610682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.013 ms 00:22:55.829 [2024-07-24 09:48:33.610692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.829 [2024-07-24 09:48:33.610728] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:55.829 [2024-07-24 09:48:33.610740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.829 [2024-07-24 09:48:33.610756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:55.829 [2024-07-24 09:48:33.610770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:55.829 [2024-07-24 09:48:33.610780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.829 [2024-07-24 09:48:33.614895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.829 [2024-07-24 09:48:33.614948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:55.829 [2024-07-24 09:48:33.614972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.101 ms 00:22:55.829 [2024-07-24 09:48:33.614982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.829 [2024-07-24 09:48:33.615054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.829 [2024-07-24 09:48:33.615067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:55.829 [2024-07-24 09:48:33.615079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:55.829 [2024-07-24 09:48:33.615089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.829 [2024-07-24 09:48:33.616533] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 121.134 ms, result 0 00:23:30.343  Copying: 33/1024 [MB] (33 MBps) Copying: 67/1024 [MB] (34 MBps) Copying: 99/1024 [MB] (31 MBps) Copying: 133/1024 [MB] (33 MBps) Copying: 162/1024 [MB] (29 MBps) Copying: 194/1024 [MB] (31 MBps) Copying: 225/1024 [MB] (31 MBps) Copying: 255/1024 [MB] (29 MBps) Copying: 283/1024 [MB] (27 MBps) Copying: 310/1024 [MB] (27 MBps) Copying: 341/1024 [MB] (30 MBps) Copying: 370/1024 [MB] (29 MBps) Copying: 400/1024 [MB] (29 MBps) Copying: 431/1024 [MB] (30 MBps) Copying: 461/1024 [MB] (30 MBps) Copying: 492/1024 [MB] (30 MBps) Copying: 525/1024 [MB] (32 MBps) Copying: 557/1024 [MB] (32 MBps) Copying: 588/1024 [MB] (31 MBps) Copying: 619/1024 [MB] (30 MBps) Copying: 649/1024 [MB] (30 MBps) Copying: 679/1024 [MB] (30 MBps) Copying: 710/1024 [MB] (30 MBps) Copying: 740/1024 [MB] (29 MBps) Copying: 769/1024 [MB] (29 MBps) Copying: 800/1024 [MB] (30 MBps) Copying: 831/1024 [MB] (30 MBps) Copying: 860/1024 [MB] (29 MBps) Copying: 892/1024 [MB] (31 MBps) Copying: 921/1024 [MB] (29 MBps) Copying: 951/1024 [MB] (30 MBps) Copying: 980/1024 [MB] (28 MBps) Copying: 1008/1024 [MB] (28 MBps) Copying: 1023/1024 [MB] (14 MBps) Copying: 1024/1024 [MB] (average 29 MBps)[2024-07-24 09:49:07.910250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.343 [2024-07-24 09:49:07.910320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:30.343 [2024-07-24 09:49:07.910347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:30.343 [2024-07-24 09:49:07.910368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.343 [2024-07-24 09:49:07.911290] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on 
app_thread 00:23:30.344 [2024-07-24 09:49:07.912767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.344 [2024-07-24 09:49:07.912805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:30.344 [2024-07-24 09:49:07.912831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.450 ms 00:23:30.344 [2024-07-24 09:49:07.912841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.344 [2024-07-24 09:49:07.923564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.344 [2024-07-24 09:49:07.923607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:30.344 [2024-07-24 09:49:07.923627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.949 ms 00:23:30.344 [2024-07-24 09:49:07.923645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.344 [2024-07-24 09:49:07.947502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.344 [2024-07-24 09:49:07.947561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:30.344 [2024-07-24 09:49:07.947577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.877 ms 00:23:30.344 [2024-07-24 09:49:07.947588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.344 [2024-07-24 09:49:07.952758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.344 [2024-07-24 09:49:07.952793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:30.344 [2024-07-24 09:49:07.952807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.144 ms 00:23:30.344 [2024-07-24 09:49:07.952831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.344 [2024-07-24 09:49:07.954438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.344 [2024-07-24 09:49:07.954474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:30.344 [2024-07-24 09:49:07.954487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.533 ms 00:23:30.344 [2024-07-24 09:49:07.954496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.344 [2024-07-24 09:49:07.958153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.344 [2024-07-24 09:49:07.958206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:30.344 [2024-07-24 09:49:07.958226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.634 ms 00:23:30.344 [2024-07-24 09:49:07.958243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.344 [2024-07-24 09:49:08.068184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.344 [2024-07-24 09:49:08.068275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:30.344 [2024-07-24 09:49:08.068301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 110.076 ms 00:23:30.344 [2024-07-24 09:49:08.068311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.344 [2024-07-24 09:49:08.070927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.344 [2024-07-24 09:49:08.070970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:30.344 [2024-07-24 09:49:08.070983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.600 ms 00:23:30.344 [2024-07-24 
09:49:08.070993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.344 [2024-07-24 09:49:08.072616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.344 [2024-07-24 09:49:08.072664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:30.344 [2024-07-24 09:49:08.072676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.593 ms 00:23:30.344 [2024-07-24 09:49:08.072685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.344 [2024-07-24 09:49:08.073853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.344 [2024-07-24 09:49:08.073887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:30.344 [2024-07-24 09:49:08.073898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:23:30.344 [2024-07-24 09:49:08.073908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.344 [2024-07-24 09:49:08.075108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.344 [2024-07-24 09:49:08.075144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:30.344 [2024-07-24 09:49:08.075156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.155 ms 00:23:30.344 [2024-07-24 09:49:08.075166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.344 [2024-07-24 09:49:08.075208] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:30.344 [2024-07-24 09:49:08.075228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 104192 / 261120 wr_cnt: 1 state: open 00:23:30.344 [2024-07-24 09:49:08.075242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:23:30.344 [2024-07-24 09:49:08.075381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:30.344 [2024-07-24 09:49:08.075740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.075993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076163] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.076926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.077084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:30.345 [2024-07-24 09:49:08.077266] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:30.345 [2024-07-24 09:49:08.077297] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 531d19f2-822e-4a37-8b81-722e0f888ffa 00:23:30.345 [2024-07-24 09:49:08.077343] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 104192 00:23:30.345 [2024-07-24 09:49:08.077371] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 105152 00:23:30.345 [2024-07-24 09:49:08.077399] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 104192 00:23:30.345 [2024-07-24 09:49:08.077429] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0092 00:23:30.345 [2024-07-24 09:49:08.077504] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:30.345 [2024-07-24 09:49:08.077549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:30.345 [2024-07-24 09:49:08.077578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:30.345 [2024-07-24 09:49:08.077606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:30.345 [2024-07-24 09:49:08.077617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:30.345 [2024-07-24 09:49:08.077628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.345 [2024-07-24 09:49:08.077638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:30.345 [2024-07-24 09:49:08.077649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.426 ms 00:23:30.345 [2024-07-24 09:49:08.077659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.345 [2024-07-24 09:49:08.079380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.345 [2024-07-24 09:49:08.079414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize L2P 00:23:30.345 [2024-07-24 09:49:08.079425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.700 ms 00:23:30.345 [2024-07-24 09:49:08.079439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.345 [2024-07-24 09:49:08.079541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.345 [2024-07-24 09:49:08.079553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:30.345 [2024-07-24 09:49:08.079563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:23:30.345 [2024-07-24 09:49:08.079573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.345 [2024-07-24 09:49:08.085647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.345 [2024-07-24 09:49:08.085764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:30.345 [2024-07-24 09:49:08.085830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.345 [2024-07-24 09:49:08.085864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.345 [2024-07-24 09:49:08.085935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.345 [2024-07-24 09:49:08.085967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:30.345 [2024-07-24 09:49:08.086008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.345 [2024-07-24 09:49:08.086036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.345 [2024-07-24 09:49:08.086121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.345 [2024-07-24 09:49:08.086220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:30.345 [2024-07-24 09:49:08.086263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.345 [2024-07-24 09:49:08.086293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.345 [2024-07-24 09:49:08.086334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.345 [2024-07-24 09:49:08.086364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:30.345 [2024-07-24 09:49:08.086393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.345 [2024-07-24 09:49:08.086513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.345 [2024-07-24 09:49:08.099567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.345 [2024-07-24 09:49:08.099810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:30.345 [2024-07-24 09:49:08.099935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.345 [2024-07-24 09:49:08.099970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.345 [2024-07-24 09:49:08.108198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.346 [2024-07-24 09:49:08.108375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:30.346 [2024-07-24 09:49:08.108457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.346 [2024-07-24 09:49:08.108491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.346 [2024-07-24 09:49:08.108569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.346 
[2024-07-24 09:49:08.108601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:30.346 [2024-07-24 09:49:08.108631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.346 [2024-07-24 09:49:08.108660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.346 [2024-07-24 09:49:08.108712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.346 [2024-07-24 09:49:08.108796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:30.346 [2024-07-24 09:49:08.108831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.346 [2024-07-24 09:49:08.108860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.346 [2024-07-24 09:49:08.108976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.346 [2024-07-24 09:49:08.109102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:30.346 [2024-07-24 09:49:08.109138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.346 [2024-07-24 09:49:08.109167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.346 [2024-07-24 09:49:08.109271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.346 [2024-07-24 09:49:08.109421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:30.346 [2024-07-24 09:49:08.109451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.346 [2024-07-24 09:49:08.109480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.346 [2024-07-24 09:49:08.109602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.346 [2024-07-24 09:49:08.109638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:30.346 [2024-07-24 09:49:08.109667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.346 [2024-07-24 09:49:08.109746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.346 [2024-07-24 09:49:08.109831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.346 [2024-07-24 09:49:08.109865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:30.346 [2024-07-24 09:49:08.109885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.346 [2024-07-24 09:49:08.109895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.346 [2024-07-24 09:49:08.110015] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 202.308 ms, result 0 00:23:31.279 00:23:31.279 00:23:31.279 09:49:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:23:33.181 09:49:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:33.181 [2024-07-24 09:49:10.760332] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:23:33.181 [2024-07-24 09:49:10.760468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93732 ] 00:23:33.181 [2024-07-24 09:49:10.929212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.181 [2024-07-24 09:49:10.980709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.450 [2024-07-24 09:49:11.084555] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:33.450 [2024-07-24 09:49:11.084640] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:33.450 [2024-07-24 09:49:11.243862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.450 [2024-07-24 09:49:11.243923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:33.450 [2024-07-24 09:49:11.243949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:33.450 [2024-07-24 09:49:11.243964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.450 [2024-07-24 09:49:11.244050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.450 [2024-07-24 09:49:11.244075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:33.450 [2024-07-24 09:49:11.244091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:33.450 [2024-07-24 09:49:11.244109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.450 [2024-07-24 09:49:11.244147] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:33.450 [2024-07-24 09:49:11.244524] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:33.450 [2024-07-24 09:49:11.244564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.450 [2024-07-24 09:49:11.244583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:33.450 [2024-07-24 09:49:11.244599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:23:33.450 [2024-07-24 09:49:11.244617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.450 [2024-07-24 09:49:11.246392] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:33.450 [2024-07-24 09:49:11.249094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.450 [2024-07-24 09:49:11.249164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:33.450 [2024-07-24 09:49:11.249200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.708 ms 00:23:33.450 [2024-07-24 09:49:11.249220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.450 [2024-07-24 09:49:11.249313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.450 [2024-07-24 09:49:11.249338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:33.450 [2024-07-24 09:49:11.249358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:33.450 [2024-07-24 09:49:11.249377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.450 [2024-07-24 09:49:11.256358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:33.450 [2024-07-24 09:49:11.256396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:33.450 [2024-07-24 09:49:11.256423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.889 ms 00:23:33.450 [2024-07-24 09:49:11.256439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.450 [2024-07-24 09:49:11.256579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.450 [2024-07-24 09:49:11.256602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:33.450 [2024-07-24 09:49:11.256626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:23:33.450 [2024-07-24 09:49:11.256642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.450 [2024-07-24 09:49:11.256720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.450 [2024-07-24 09:49:11.256748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:33.450 [2024-07-24 09:49:11.256765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:33.450 [2024-07-24 09:49:11.256794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.451 [2024-07-24 09:49:11.256847] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:33.451 [2024-07-24 09:49:11.258801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.451 [2024-07-24 09:49:11.258836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:33.451 [2024-07-24 09:49:11.258859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.969 ms 00:23:33.451 [2024-07-24 09:49:11.258875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.451 [2024-07-24 09:49:11.258943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.451 [2024-07-24 09:49:11.258962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:33.451 [2024-07-24 09:49:11.258980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:33.451 [2024-07-24 09:49:11.259007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.451 [2024-07-24 09:49:11.259042] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:33.451 [2024-07-24 09:49:11.259088] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:33.451 [2024-07-24 09:49:11.259145] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:33.451 [2024-07-24 09:49:11.259180] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:33.451 [2024-07-24 09:49:11.259337] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:33.451 [2024-07-24 09:49:11.259361] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:33.451 [2024-07-24 09:49:11.259399] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:33.451 [2024-07-24 09:49:11.259421] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:33.451 [2024-07-24 09:49:11.259443] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:33.451 [2024-07-24 09:49:11.259473] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:33.451 [2024-07-24 09:49:11.259490] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:33.451 [2024-07-24 09:49:11.259507] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:33.451 [2024-07-24 09:49:11.259529] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:33.451 [2024-07-24 09:49:11.259549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.451 [2024-07-24 09:49:11.259571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:33.451 [2024-07-24 09:49:11.259600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:23:33.451 [2024-07-24 09:49:11.259618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.451 [2024-07-24 09:49:11.259723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.451 [2024-07-24 09:49:11.259765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:33.451 [2024-07-24 09:49:11.259784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:33.451 [2024-07-24 09:49:11.259801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.451 [2024-07-24 09:49:11.259960] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:33.451 [2024-07-24 09:49:11.259992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:33.451 [2024-07-24 09:49:11.260016] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:33.451 [2024-07-24 09:49:11.260036] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.451 [2024-07-24 09:49:11.260052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:33.451 [2024-07-24 09:49:11.260079] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:33.451 [2024-07-24 09:49:11.260097] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:33.451 [2024-07-24 09:49:11.260113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:33.451 [2024-07-24 09:49:11.260130] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:33.451 [2024-07-24 09:49:11.260147] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:33.451 [2024-07-24 09:49:11.260176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:33.451 [2024-07-24 09:49:11.260193] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:33.451 [2024-07-24 09:49:11.260223] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:33.451 [2024-07-24 09:49:11.260240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:33.451 [2024-07-24 09:49:11.260257] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:33.451 [2024-07-24 09:49:11.260274] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.451 [2024-07-24 09:49:11.260290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:33.451 [2024-07-24 09:49:11.260307] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:33.451 [2024-07-24 09:49:11.260324] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.451 [2024-07-24 09:49:11.260342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:33.451 [2024-07-24 09:49:11.260358] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:33.451 [2024-07-24 09:49:11.260379] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:33.451 [2024-07-24 09:49:11.260396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:33.451 [2024-07-24 09:49:11.260412] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:33.451 [2024-07-24 09:49:11.260428] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:33.451 [2024-07-24 09:49:11.260445] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:33.451 [2024-07-24 09:49:11.260462] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:33.451 [2024-07-24 09:49:11.260478] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:33.451 [2024-07-24 09:49:11.260494] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:33.451 [2024-07-24 09:49:11.260512] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:33.451 [2024-07-24 09:49:11.260529] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:33.451 [2024-07-24 09:49:11.260544] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:33.451 [2024-07-24 09:49:11.260570] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:33.451 [2024-07-24 09:49:11.260587] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:33.451 [2024-07-24 09:49:11.260601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:33.451 [2024-07-24 09:49:11.260617] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:33.451 [2024-07-24 09:49:11.260632] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:33.451 [2024-07-24 09:49:11.260654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:33.452 [2024-07-24 09:49:11.260671] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:33.452 [2024-07-24 09:49:11.260688] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.452 [2024-07-24 09:49:11.260702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:33.452 [2024-07-24 09:49:11.260718] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:33.452 [2024-07-24 09:49:11.260733] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.452 [2024-07-24 09:49:11.260748] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:33.452 [2024-07-24 09:49:11.260765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:33.452 [2024-07-24 09:49:11.260780] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:33.452 [2024-07-24 09:49:11.260797] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.452 [2024-07-24 09:49:11.260812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:33.452 [2024-07-24 09:49:11.260828] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:33.452 [2024-07-24 09:49:11.260845] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:33.452 
[2024-07-24 09:49:11.260861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:33.452 [2024-07-24 09:49:11.260875] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:33.452 [2024-07-24 09:49:11.260891] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:33.452 [2024-07-24 09:49:11.260926] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:33.452 [2024-07-24 09:49:11.260974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:33.452 [2024-07-24 09:49:11.260996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:33.452 [2024-07-24 09:49:11.261016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:33.452 [2024-07-24 09:49:11.261036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:33.452 [2024-07-24 09:49:11.261056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:33.452 [2024-07-24 09:49:11.261075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:33.452 [2024-07-24 09:49:11.261093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:33.452 [2024-07-24 09:49:11.261115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:33.452 [2024-07-24 09:49:11.261135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:33.452 [2024-07-24 09:49:11.261155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:33.452 [2024-07-24 09:49:11.261175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:33.452 [2024-07-24 09:49:11.261195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:33.452 [2024-07-24 09:49:11.261226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:33.452 [2024-07-24 09:49:11.261247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:33.452 [2024-07-24 09:49:11.261268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:33.452 [2024-07-24 09:49:11.261293] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:33.452 [2024-07-24 09:49:11.261316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:33.452 [2024-07-24 09:49:11.261339] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:33.452 [2024-07-24 09:49:11.261358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:33.452 [2024-07-24 09:49:11.261378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:33.452 [2024-07-24 09:49:11.261398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:33.452 [2024-07-24 09:49:11.261420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.452 [2024-07-24 09:49:11.261445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:33.452 [2024-07-24 09:49:11.261464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.527 ms 00:23:33.452 [2024-07-24 09:49:11.261483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.716 [2024-07-24 09:49:11.282793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.716 [2024-07-24 09:49:11.282855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:33.716 [2024-07-24 09:49:11.282885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.234 ms 00:23:33.716 [2024-07-24 09:49:11.282905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.716 [2024-07-24 09:49:11.283049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.716 [2024-07-24 09:49:11.283092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:33.716 [2024-07-24 09:49:11.283133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:23:33.716 [2024-07-24 09:49:11.283155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.716 [2024-07-24 09:49:11.294068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.716 [2024-07-24 09:49:11.294117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:33.716 [2024-07-24 09:49:11.294152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.774 ms 00:23:33.716 [2024-07-24 09:49:11.294181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.716 [2024-07-24 09:49:11.294303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.716 [2024-07-24 09:49:11.294329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:33.716 [2024-07-24 09:49:11.294348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:33.716 [2024-07-24 09:49:11.294364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.716 [2024-07-24 09:49:11.294919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.716 [2024-07-24 09:49:11.294950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:33.716 [2024-07-24 09:49:11.294970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:23:33.716 [2024-07-24 09:49:11.294985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.716 [2024-07-24 09:49:11.295176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.716 [2024-07-24 09:49:11.295234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:33.716 [2024-07-24 09:49:11.295271] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:23:33.716 [2024-07-24 09:49:11.295288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.716 [2024-07-24 09:49:11.301415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.716 [2024-07-24 09:49:11.301458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:33.716 [2024-07-24 09:49:11.301487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.079 ms 00:23:33.716 [2024-07-24 09:49:11.301502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.716 [2024-07-24 09:49:11.304375] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:33.716 [2024-07-24 09:49:11.304421] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:33.716 [2024-07-24 09:49:11.304445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.716 [2024-07-24 09:49:11.304461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:33.716 [2024-07-24 09:49:11.304477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.797 ms 00:23:33.716 [2024-07-24 09:49:11.304493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.716 [2024-07-24 09:49:11.317407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.716 [2024-07-24 09:49:11.317452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:33.716 [2024-07-24 09:49:11.317476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.866 ms 00:23:33.716 [2024-07-24 09:49:11.317506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.716 [2024-07-24 09:49:11.319649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.716 [2024-07-24 09:49:11.319686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:33.716 [2024-07-24 09:49:11.319707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.089 ms 00:23:33.716 [2024-07-24 09:49:11.319721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.716 [2024-07-24 09:49:11.321411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.716 [2024-07-24 09:49:11.321446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:33.717 [2024-07-24 09:49:11.321465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.637 ms 00:23:33.717 [2024-07-24 09:49:11.321479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.717 [2024-07-24 09:49:11.321871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.717 [2024-07-24 09:49:11.321905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:33.717 [2024-07-24 09:49:11.321942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:23:33.717 [2024-07-24 09:49:11.321957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.717 [2024-07-24 09:49:11.342983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.717 [2024-07-24 09:49:11.343063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:33.717 [2024-07-24 09:49:11.343089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
21.005 ms 00:23:33.717 [2024-07-24 09:49:11.343116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.717 [2024-07-24 09:49:11.349681] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:33.717 [2024-07-24 09:49:11.352714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.717 [2024-07-24 09:49:11.352751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:33.717 [2024-07-24 09:49:11.352773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.545 ms 00:23:33.717 [2024-07-24 09:49:11.352789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.717 [2024-07-24 09:49:11.352951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.717 [2024-07-24 09:49:11.352979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:33.717 [2024-07-24 09:49:11.352999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:33.717 [2024-07-24 09:49:11.353020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.717 [2024-07-24 09:49:11.354826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.717 [2024-07-24 09:49:11.354959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:33.717 [2024-07-24 09:49:11.355060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.748 ms 00:23:33.717 [2024-07-24 09:49:11.355117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.717 [2024-07-24 09:49:11.355212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.717 [2024-07-24 09:49:11.355270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:33.717 [2024-07-24 09:49:11.355394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:33.717 [2024-07-24 09:49:11.355463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.717 [2024-07-24 09:49:11.355584] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:33.717 [2024-07-24 09:49:11.355616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.717 [2024-07-24 09:49:11.355634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:33.717 [2024-07-24 09:49:11.355651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:33.717 [2024-07-24 09:49:11.355666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.717 [2024-07-24 09:49:11.359509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.717 [2024-07-24 09:49:11.359638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:33.717 [2024-07-24 09:49:11.359757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.811 ms 00:23:33.717 [2024-07-24 09:49:11.359817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.717 [2024-07-24 09:49:11.360075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.717 [2024-07-24 09:49:11.360137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:33.717 [2024-07-24 09:49:11.360198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:33.717 [2024-07-24 09:49:11.360261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.717 
[2024-07-24 09:49:11.366649] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 120.250 ms, result 0 00:24:05.487  Copying: 1196/1048576 [kB] (1196 kBps) Copying: 8848/1048576 [kB] (7652 kBps) Copying: 42/1024 [MB] (33 MBps) Copying: 76/1024 [MB] (33 MBps) Copying: 110/1024 [MB] (34 MBps) Copying: 144/1024 [MB] (33 MBps) Copying: 177/1024 [MB] (33 MBps) Copying: 211/1024 [MB] (33 MBps) Copying: 245/1024 [MB] (33 MBps) Copying: 279/1024 [MB] (33 MBps) Copying: 312/1024 [MB] (33 MBps) Copying: 346/1024 [MB] (33 MBps) Copying: 379/1024 [MB] (33 MBps) Copying: 412/1024 [MB] (32 MBps) Copying: 447/1024 [MB] (34 MBps) Copying: 482/1024 [MB] (35 MBps) Copying: 518/1024 [MB] (36 MBps) Copying: 555/1024 [MB] (36 MBps) Copying: 592/1024 [MB] (37 MBps) Copying: 631/1024 [MB] (39 MBps) Copying: 669/1024 [MB] (38 MBps) Copying: 708/1024 [MB] (38 MBps) Copying: 746/1024 [MB] (38 MBps) Copying: 784/1024 [MB] (38 MBps) Copying: 822/1024 [MB] (37 MBps) Copying: 861/1024 [MB] (38 MBps) Copying: 898/1024 [MB] (37 MBps) Copying: 937/1024 [MB] (38 MBps) Copying: 975/1024 [MB] (38 MBps) Copying: 1012/1024 [MB] (36 MBps) Copying: 1024/1024 [MB] (average 33 MBps)[2024-07-24 09:49:43.114702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.487 [2024-07-24 09:49:43.114814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:05.487 [2024-07-24 09:49:43.114849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:05.487 [2024-07-24 09:49:43.114885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.487 [2024-07-24 09:49:43.114935] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:05.487 [2024-07-24 09:49:43.115821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.487 [2024-07-24 09:49:43.115860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:05.487 [2024-07-24 09:49:43.115886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.851 ms 00:24:05.487 [2024-07-24 09:49:43.115909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.487 [2024-07-24 09:49:43.116350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.487 [2024-07-24 09:49:43.116397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:05.487 [2024-07-24 09:49:43.116422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:24:05.487 [2024-07-24 09:49:43.116453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.487 [2024-07-24 09:49:43.131372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.487 [2024-07-24 09:49:43.131428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:05.487 [2024-07-24 09:49:43.131444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.903 ms 00:24:05.487 [2024-07-24 09:49:43.131455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.487 [2024-07-24 09:49:43.137355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.487 [2024-07-24 09:49:43.137405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:05.487 [2024-07-24 09:49:43.137419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.871 ms 00:24:05.487 [2024-07-24 09:49:43.137430] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.487 [2024-07-24 09:49:43.139317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.487 [2024-07-24 09:49:43.139354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:05.487 [2024-07-24 09:49:43.139366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.813 ms 00:24:05.487 [2024-07-24 09:49:43.139376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.487 [2024-07-24 09:49:43.142846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.487 [2024-07-24 09:49:43.142887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:05.487 [2024-07-24 09:49:43.142900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.445 ms 00:24:05.487 [2024-07-24 09:49:43.142910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.487 [2024-07-24 09:49:43.146356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.487 [2024-07-24 09:49:43.146394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:05.487 [2024-07-24 09:49:43.146419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.406 ms 00:24:05.487 [2024-07-24 09:49:43.146429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.487 [2024-07-24 09:49:43.148713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.487 [2024-07-24 09:49:43.148748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:05.487 [2024-07-24 09:49:43.148760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.264 ms 00:24:05.487 [2024-07-24 09:49:43.148769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.487 [2024-07-24 09:49:43.150267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.487 [2024-07-24 09:49:43.150299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:05.487 [2024-07-24 09:49:43.150310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.470 ms 00:24:05.487 [2024-07-24 09:49:43.150320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.487 [2024-07-24 09:49:43.151338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.487 [2024-07-24 09:49:43.151369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:05.487 [2024-07-24 09:49:43.151394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:24:05.487 [2024-07-24 09:49:43.151404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.487 [2024-07-24 09:49:43.152514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.487 [2024-07-24 09:49:43.152547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:05.487 [2024-07-24 09:49:43.152558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.059 ms 00:24:05.487 [2024-07-24 09:49:43.152567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.487 [2024-07-24 09:49:43.152594] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:05.487 [2024-07-24 09:49:43.152611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:05.487 [2024-07-24 09:49:43.152624] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3072 / 261120 wr_cnt: 1 state: open 00:24:05.487 [2024-07-24 09:49:43.152635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:05.487 [2024-07-24 09:49:43.152646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:05.487 [2024-07-24 09:49:43.152656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:05.487 [2024-07-24 09:49:43.152667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:05.487 [2024-07-24 09:49:43.152677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.152688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.152698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.152708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.152719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.152730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.152740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.152751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.152761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.152772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.152782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.152792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.152803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.152813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.152824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.152834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.152845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.152855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.152866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.152876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 
09:49:43.153032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
00:24:05.488 [2024-07-24 09:49:43.153344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:05.488 [2024-07-24 09:49:43.153793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:05.489 [2024-07-24 09:49:43.153804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:05.489 [2024-07-24 09:49:43.153815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:05.489 [2024-07-24 09:49:43.153826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:05.489 [2024-07-24 09:49:43.153837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:05.489 [2024-07-24 09:49:43.153847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:05.489 [2024-07-24 09:49:43.153858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:05.489 [2024-07-24 09:49:43.153871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:05.489 [2024-07-24 09:49:43.153882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:05.489 [2024-07-24 09:49:43.153901] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:24:05.489 [2024-07-24 09:49:43.153911] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 531d19f2-822e-4a37-8b81-722e0f888ffa 00:24:05.489 [2024-07-24 09:49:43.153922] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264192 00:24:05.489 [2024-07-24 09:49:43.153933] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 161984 00:24:05.489 [2024-07-24 09:49:43.153943] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 160000 00:24:05.489 [2024-07-24 09:49:43.153954] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0124 00:24:05.489 [2024-07-24 09:49:43.153963] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:05.489 [2024-07-24 09:49:43.153974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:05.489 [2024-07-24 09:49:43.154008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:05.489 [2024-07-24 09:49:43.154017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:05.489 [2024-07-24 09:49:43.154026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:05.489 [2024-07-24 09:49:43.154035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.489 [2024-07-24 09:49:43.154049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:05.489 [2024-07-24 09:49:43.154060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.445 ms 00:24:05.489 [2024-07-24 09:49:43.154070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.489 [2024-07-24 09:49:43.155879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.489 [2024-07-24 09:49:43.155900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:05.489 [2024-07-24 09:49:43.155916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.794 ms 00:24:05.489 [2024-07-24 09:49:43.155926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.489 [2024-07-24 09:49:43.156033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.489 [2024-07-24 09:49:43.156044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:05.489 [2024-07-24 09:49:43.156064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:24:05.489 [2024-07-24 09:49:43.156074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.489 [2024-07-24 09:49:43.162157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.489 [2024-07-24 09:49:43.162191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:05.489 [2024-07-24 09:49:43.162210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.489 [2024-07-24 09:49:43.162225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.489 [2024-07-24 09:49:43.162283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.489 [2024-07-24 09:49:43.162295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:05.489 [2024-07-24 09:49:43.162305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.489 [2024-07-24 09:49:43.162315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.489 [2024-07-24 09:49:43.162384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:24:05.489 [2024-07-24 09:49:43.162397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:05.489 [2024-07-24 09:49:43.162407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.489 [2024-07-24 09:49:43.162417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.489 [2024-07-24 09:49:43.162438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.489 [2024-07-24 09:49:43.162448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:05.489 [2024-07-24 09:49:43.162458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.489 [2024-07-24 09:49:43.162475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.489 [2024-07-24 09:49:43.176123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.489 [2024-07-24 09:49:43.176177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:05.489 [2024-07-24 09:49:43.176206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.489 [2024-07-24 09:49:43.176217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.489 [2024-07-24 09:49:43.184523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.489 [2024-07-24 09:49:43.184570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:05.489 [2024-07-24 09:49:43.184583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.489 [2024-07-24 09:49:43.184593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.489 [2024-07-24 09:49:43.184662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.489 [2024-07-24 09:49:43.184673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:05.489 [2024-07-24 09:49:43.184684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.489 [2024-07-24 09:49:43.184694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.489 [2024-07-24 09:49:43.184718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.489 [2024-07-24 09:49:43.184737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:05.489 [2024-07-24 09:49:43.184747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.489 [2024-07-24 09:49:43.184765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.489 [2024-07-24 09:49:43.184849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.489 [2024-07-24 09:49:43.184862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:05.489 [2024-07-24 09:49:43.184873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.489 [2024-07-24 09:49:43.184882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.489 [2024-07-24 09:49:43.184923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.489 [2024-07-24 09:49:43.184935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:05.489 [2024-07-24 09:49:43.184950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.489 [2024-07-24 09:49:43.184959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.489 [2024-07-24 
09:49:43.184997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.489 [2024-07-24 09:49:43.185008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:05.489 [2024-07-24 09:49:43.185018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.489 [2024-07-24 09:49:43.185028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.489 [2024-07-24 09:49:43.185069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.489 [2024-07-24 09:49:43.185084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:05.489 [2024-07-24 09:49:43.185094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.489 [2024-07-24 09:49:43.185111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.489 [2024-07-24 09:49:43.185261] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.645 ms, result 0 00:24:05.749 00:24:05.749 00:24:05.749 09:49:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:07.652 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:07.652 09:49:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:07.652 [2024-07-24 09:49:45.244751] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:24:07.652 [2024-07-24 09:49:45.244890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94088 ] 00:24:07.652 [2024-07-24 09:49:45.412740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.652 [2024-07-24 09:49:45.460238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.911 [2024-07-24 09:49:45.563333] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:07.911 [2024-07-24 09:49:45.563404] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:07.911 [2024-07-24 09:49:45.722410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.911 [2024-07-24 09:49:45.722472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:07.911 [2024-07-24 09:49:45.722488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:07.911 [2024-07-24 09:49:45.722498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.911 [2024-07-24 09:49:45.722552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.911 [2024-07-24 09:49:45.722568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:07.911 [2024-07-24 09:49:45.722579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:07.911 [2024-07-24 09:49:45.722589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.911 [2024-07-24 09:49:45.722610] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:07.911 [2024-07-24 09:49:45.722841] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:07.911 [2024-07-24 09:49:45.722859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.911 [2024-07-24 09:49:45.722870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:07.911 [2024-07-24 09:49:45.722881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:24:07.911 [2024-07-24 09:49:45.722890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.911 [2024-07-24 09:49:45.724307] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:07.911 [2024-07-24 09:49:45.726732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.911 [2024-07-24 09:49:45.726783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:07.911 [2024-07-24 09:49:45.726803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.429 ms 00:24:07.911 [2024-07-24 09:49:45.726813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.911 [2024-07-24 09:49:45.726876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.911 [2024-07-24 09:49:45.726888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:07.911 [2024-07-24 09:49:45.726899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:07.911 [2024-07-24 09:49:45.726909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.171 [2024-07-24 09:49:45.733616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.171 [2024-07-24 09:49:45.733645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:08.171 [2024-07-24 09:49:45.733666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.652 ms 00:24:08.171 [2024-07-24 09:49:45.733682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.171 [2024-07-24 09:49:45.733778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.171 [2024-07-24 09:49:45.733790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:08.171 [2024-07-24 09:49:45.733809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:24:08.171 [2024-07-24 09:49:45.733819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.171 [2024-07-24 09:49:45.733871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.171 [2024-07-24 09:49:45.733886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:08.171 [2024-07-24 09:49:45.733896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:08.171 [2024-07-24 09:49:45.733913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.171 [2024-07-24 09:49:45.733939] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:08.171 [2024-07-24 09:49:45.735566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.171 [2024-07-24 09:49:45.735594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:08.171 [2024-07-24 09:49:45.735605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.637 ms 00:24:08.171 [2024-07-24 09:49:45.735615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.171 
[2024-07-24 09:49:45.735651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.171 [2024-07-24 09:49:45.735662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:08.171 [2024-07-24 09:49:45.735672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:08.171 [2024-07-24 09:49:45.735685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.171 [2024-07-24 09:49:45.735708] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:08.171 [2024-07-24 09:49:45.735731] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:08.171 [2024-07-24 09:49:45.735788] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:08.171 [2024-07-24 09:49:45.735814] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:08.171 [2024-07-24 09:49:45.735898] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:08.171 [2024-07-24 09:49:45.735912] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:08.171 [2024-07-24 09:49:45.735924] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:08.171 [2024-07-24 09:49:45.735945] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:08.171 [2024-07-24 09:49:45.735957] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:08.171 [2024-07-24 09:49:45.735968] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:08.171 [2024-07-24 09:49:45.735977] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:08.171 [2024-07-24 09:49:45.735987] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:08.171 [2024-07-24 09:49:45.735997] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:08.171 [2024-07-24 09:49:45.736007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.171 [2024-07-24 09:49:45.736021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:08.171 [2024-07-24 09:49:45.736034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:24:08.171 [2024-07-24 09:49:45.736044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.171 [2024-07-24 09:49:45.736112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.171 [2024-07-24 09:49:45.736130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:08.171 [2024-07-24 09:49:45.736140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:08.171 [2024-07-24 09:49:45.736149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.171 [2024-07-24 09:49:45.736253] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:08.171 [2024-07-24 09:49:45.736268] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:08.171 [2024-07-24 09:49:45.736288] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:08.171 [2024-07-24 09:49:45.736299] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.171 [2024-07-24 09:49:45.736309] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:08.171 [2024-07-24 09:49:45.736318] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:08.171 [2024-07-24 09:49:45.736328] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:08.172 [2024-07-24 09:49:45.736338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:08.172 [2024-07-24 09:49:45.736348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:08.172 [2024-07-24 09:49:45.736360] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:08.172 [2024-07-24 09:49:45.736379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:08.172 [2024-07-24 09:49:45.736389] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:08.172 [2024-07-24 09:49:45.736398] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:08.172 [2024-07-24 09:49:45.736407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:08.172 [2024-07-24 09:49:45.736417] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:08.172 [2024-07-24 09:49:45.736426] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.172 [2024-07-24 09:49:45.736435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:08.172 [2024-07-24 09:49:45.736445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:08.172 [2024-07-24 09:49:45.736454] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.172 [2024-07-24 09:49:45.736463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:08.172 [2024-07-24 09:49:45.736473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:08.172 [2024-07-24 09:49:45.736482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.172 [2024-07-24 09:49:45.736491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:08.172 [2024-07-24 09:49:45.736501] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:08.172 [2024-07-24 09:49:45.736510] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.172 [2024-07-24 09:49:45.736524] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:08.172 [2024-07-24 09:49:45.736534] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:08.172 [2024-07-24 09:49:45.736543] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.172 [2024-07-24 09:49:45.736551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:08.172 [2024-07-24 09:49:45.736560] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:08.172 [2024-07-24 09:49:45.736569] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.172 [2024-07-24 09:49:45.736578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:08.172 [2024-07-24 09:49:45.736587] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:08.172 [2024-07-24 09:49:45.736596] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:08.172 [2024-07-24 09:49:45.736605] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:08.172 [2024-07-24 09:49:45.736614] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:08.172 [2024-07-24 09:49:45.736622] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:08.172 [2024-07-24 09:49:45.736631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:08.172 [2024-07-24 09:49:45.736640] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:08.172 [2024-07-24 09:49:45.736649] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.172 [2024-07-24 09:49:45.736658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:08.172 [2024-07-24 09:49:45.736670] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:08.172 [2024-07-24 09:49:45.736680] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.172 [2024-07-24 09:49:45.736689] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:08.172 [2024-07-24 09:49:45.736699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:08.172 [2024-07-24 09:49:45.736708] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:08.172 [2024-07-24 09:49:45.736717] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.172 [2024-07-24 09:49:45.736727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:08.172 [2024-07-24 09:49:45.736736] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:08.172 [2024-07-24 09:49:45.736745] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:08.172 [2024-07-24 09:49:45.736754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:08.172 [2024-07-24 09:49:45.736764] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:08.172 [2024-07-24 09:49:45.736773] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:08.172 [2024-07-24 09:49:45.736783] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:08.172 [2024-07-24 09:49:45.736795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:08.172 [2024-07-24 09:49:45.736806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:08.172 [2024-07-24 09:49:45.736817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:08.172 [2024-07-24 09:49:45.736830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:08.172 [2024-07-24 09:49:45.736841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:08.172 [2024-07-24 09:49:45.736851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:08.172 [2024-07-24 09:49:45.736861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:08.172 [2024-07-24 09:49:45.736872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:08.172 [2024-07-24 
09:49:45.736882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:08.172 [2024-07-24 09:49:45.736892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:08.172 [2024-07-24 09:49:45.736902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:08.172 [2024-07-24 09:49:45.736922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:08.172 [2024-07-24 09:49:45.736932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:08.172 [2024-07-24 09:49:45.736942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:08.172 [2024-07-24 09:49:45.736953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:08.172 [2024-07-24 09:49:45.736963] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:08.172 [2024-07-24 09:49:45.736973] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:08.172 [2024-07-24 09:49:45.736992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:08.172 [2024-07-24 09:49:45.737002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:08.172 [2024-07-24 09:49:45.737016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:08.173 [2024-07-24 09:49:45.737026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:08.173 [2024-07-24 09:49:45.737037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.173 [2024-07-24 09:49:45.737050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:08.173 [2024-07-24 09:49:45.737060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.851 ms 00:24:08.173 [2024-07-24 09:49:45.737069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.173 [2024-07-24 09:49:45.761741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.173 [2024-07-24 09:49:45.761924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:08.173 [2024-07-24 09:49:45.762069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.643 ms 00:24:08.173 [2024-07-24 09:49:45.762119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.173 [2024-07-24 09:49:45.762275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.173 [2024-07-24 09:49:45.762418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:08.173 [2024-07-24 09:49:45.762515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:24:08.173 [2024-07-24 09:49:45.762554] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.173 [2024-07-24 09:49:45.775345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.173 [2024-07-24 09:49:45.775512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:08.173 [2024-07-24 09:49:45.775596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.703 ms 00:24:08.173 [2024-07-24 09:49:45.775635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.173 [2024-07-24 09:49:45.775711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.173 [2024-07-24 09:49:45.775754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:08.173 [2024-07-24 09:49:45.775788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:08.173 [2024-07-24 09:49:45.775825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.173 [2024-07-24 09:49:45.776469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.173 [2024-07-24 09:49:45.776587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:08.173 [2024-07-24 09:49:45.776664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:24:08.173 [2024-07-24 09:49:45.776702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.173 [2024-07-24 09:49:45.776856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.173 [2024-07-24 09:49:45.776895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:08.173 [2024-07-24 09:49:45.777030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:24:08.173 [2024-07-24 09:49:45.777070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.173 [2024-07-24 09:49:45.783218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.173 [2024-07-24 09:49:45.783356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:08.173 [2024-07-24 09:49:45.783486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.107 ms 00:24:08.173 [2024-07-24 09:49:45.783504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.173 [2024-07-24 09:49:45.786167] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:08.173 [2024-07-24 09:49:45.786215] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:08.173 [2024-07-24 09:49:45.786231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.173 [2024-07-24 09:49:45.786241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:08.173 [2024-07-24 09:49:45.786252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.633 ms 00:24:08.173 [2024-07-24 09:49:45.786271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.173 [2024-07-24 09:49:45.798762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.173 [2024-07-24 09:49:45.798816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:08.173 [2024-07-24 09:49:45.798830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.474 ms 00:24:08.173 [2024-07-24 09:49:45.798840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.173 [2024-07-24 
09:49:45.800680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.173 [2024-07-24 09:49:45.800711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:08.173 [2024-07-24 09:49:45.800723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.804 ms 00:24:08.173 [2024-07-24 09:49:45.800733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.173 [2024-07-24 09:49:45.802332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.173 [2024-07-24 09:49:45.802363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:08.173 [2024-07-24 09:49:45.802375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.568 ms 00:24:08.173 [2024-07-24 09:49:45.802393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.173 [2024-07-24 09:49:45.802675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.173 [2024-07-24 09:49:45.802691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:08.173 [2024-07-24 09:49:45.802705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:24:08.173 [2024-07-24 09:49:45.802724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.173 [2024-07-24 09:49:45.823328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.173 [2024-07-24 09:49:45.823396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:08.173 [2024-07-24 09:49:45.823413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.616 ms 00:24:08.173 [2024-07-24 09:49:45.823423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.173 [2024-07-24 09:49:45.829683] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:08.173 [2024-07-24 09:49:45.832314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.173 [2024-07-24 09:49:45.832349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:08.173 [2024-07-24 09:49:45.832372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.858 ms 00:24:08.173 [2024-07-24 09:49:45.832382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.173 [2024-07-24 09:49:45.832436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.173 [2024-07-24 09:49:45.832451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:08.173 [2024-07-24 09:49:45.832461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:08.173 [2024-07-24 09:49:45.832474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.173 [2024-07-24 09:49:45.833496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.173 [2024-07-24 09:49:45.833608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:08.173 [2024-07-24 09:49:45.833714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:24:08.173 [2024-07-24 09:49:45.833751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.173 [2024-07-24 09:49:45.833798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.173 [2024-07-24 09:49:45.833830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:08.173 [2024-07-24 09:49:45.833860] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:08.173 [2024-07-24 09:49:45.833943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.174 [2024-07-24 09:49:45.834015] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:08.174 [2024-07-24 09:49:45.834057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.174 [2024-07-24 09:49:45.834086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:08.174 [2024-07-24 09:49:45.834117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:08.174 [2024-07-24 09:49:45.834184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.174 [2024-07-24 09:49:45.837851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.174 [2024-07-24 09:49:45.837984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:08.174 [2024-07-24 09:49:45.838073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.558 ms 00:24:08.174 [2024-07-24 09:49:45.838108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.174 [2024-07-24 09:49:45.838202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.174 [2024-07-24 09:49:45.838297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:08.174 [2024-07-24 09:49:45.838393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:08.174 [2024-07-24 09:49:45.838405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.174 [2024-07-24 09:49:45.839422] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 116.773 ms, result 0 00:24:41.876  Copying: 30/1024 [MB] (30 MBps) Copying: 58/1024 [MB] (27 MBps) Copying: 88/1024 [MB] (30 MBps) Copying: 122/1024 [MB] (33 MBps) Copying: 156/1024 [MB] (34 MBps) Copying: 188/1024 [MB] (31 MBps) Copying: 218/1024 [MB] (30 MBps) Copying: 250/1024 [MB] (31 MBps) Copying: 286/1024 [MB] (36 MBps) Copying: 319/1024 [MB] (33 MBps) Copying: 349/1024 [MB] (30 MBps) Copying: 379/1024 [MB] (29 MBps) Copying: 412/1024 [MB] (33 MBps) Copying: 446/1024 [MB] (33 MBps) Copying: 476/1024 [MB] (30 MBps) Copying: 506/1024 [MB] (29 MBps) Copying: 537/1024 [MB] (30 MBps) Copying: 571/1024 [MB] (34 MBps) Copying: 604/1024 [MB] (33 MBps) Copying: 634/1024 [MB] (29 MBps) Copying: 662/1024 [MB] (28 MBps) Copying: 690/1024 [MB] (27 MBps) Copying: 719/1024 [MB] (28 MBps) Copying: 748/1024 [MB] (29 MBps) Copying: 780/1024 [MB] (31 MBps) Copying: 808/1024 [MB] (28 MBps) Copying: 837/1024 [MB] (28 MBps) Copying: 866/1024 [MB] (28 MBps) Copying: 895/1024 [MB] (29 MBps) Copying: 925/1024 [MB] (30 MBps) Copying: 954/1024 [MB] (28 MBps) Copying: 983/1024 [MB] (29 MBps) Copying: 1010/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 30 MBps)[2024-07-24 09:50:19.552769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.876 [2024-07-24 09:50:19.552881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:41.876 [2024-07-24 09:50:19.552933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:41.876 [2024-07-24 09:50:19.552959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.876 [2024-07-24 09:50:19.553008] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:24:41.876 [2024-07-24 09:50:19.554347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.876 [2024-07-24 09:50:19.554402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:41.876 [2024-07-24 09:50:19.554431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.920 ms 00:24:41.876 [2024-07-24 09:50:19.554455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.876 [2024-07-24 09:50:19.554898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.876 [2024-07-24 09:50:19.554925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:41.876 [2024-07-24 09:50:19.554950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:24:41.876 [2024-07-24 09:50:19.554972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.876 [2024-07-24 09:50:19.559870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.876 [2024-07-24 09:50:19.560548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:41.876 [2024-07-24 09:50:19.560568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.852 ms 00:24:41.876 [2024-07-24 09:50:19.560583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.876 [2024-07-24 09:50:19.568116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.876 [2024-07-24 09:50:19.568167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:41.876 [2024-07-24 09:50:19.568180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.511 ms 00:24:41.876 [2024-07-24 09:50:19.568221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.876 [2024-07-24 09:50:19.569865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.876 [2024-07-24 09:50:19.569907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:41.876 [2024-07-24 09:50:19.569921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.577 ms 00:24:41.876 [2024-07-24 09:50:19.569931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.876 [2024-07-24 09:50:19.573681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.876 [2024-07-24 09:50:19.573722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:41.876 [2024-07-24 09:50:19.573751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.726 ms 00:24:41.876 [2024-07-24 09:50:19.573761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.876 [2024-07-24 09:50:19.577058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.876 [2024-07-24 09:50:19.577105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:41.876 [2024-07-24 09:50:19.577119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.265 ms 00:24:41.876 [2024-07-24 09:50:19.577129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.876 [2024-07-24 09:50:19.579292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.876 [2024-07-24 09:50:19.579325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:41.876 [2024-07-24 09:50:19.579337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.148 ms 00:24:41.876 
[2024-07-24 09:50:19.579346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.876 [2024-07-24 09:50:19.580925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.876 [2024-07-24 09:50:19.580962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:41.876 [2024-07-24 09:50:19.580974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.553 ms 00:24:41.876 [2024-07-24 09:50:19.580983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.876 [2024-07-24 09:50:19.582292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.876 [2024-07-24 09:50:19.582338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:41.876 [2024-07-24 09:50:19.582350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.281 ms 00:24:41.876 [2024-07-24 09:50:19.582359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.876 [2024-07-24 09:50:19.583458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.876 [2024-07-24 09:50:19.583491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:41.876 [2024-07-24 09:50:19.583503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.051 ms 00:24:41.876 [2024-07-24 09:50:19.583512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.876 [2024-07-24 09:50:19.583538] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:41.876 [2024-07-24 09:50:19.583555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:41.876 [2024-07-24 09:50:19.583568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3072 / 261120 wr_cnt: 1 state: open 00:24:41.876 [2024-07-24 09:50:19.583579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 
state: free 00:24:41.876 [2024-07-24 09:50:19.583706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 
0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.583990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.584869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585338] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:41.876 [2024-07-24 09:50:19.585474] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:41.876 [2024-07-24 09:50:19.585485] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 531d19f2-822e-4a37-8b81-722e0f888ffa 00:24:41.876 [2024-07-24 09:50:19.585495] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264192 00:24:41.876 [2024-07-24 09:50:19.585505] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:41.876 [2024-07-24 09:50:19.585515] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:41.876 [2024-07-24 09:50:19.585529] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:41.876 [2024-07-24 09:50:19.585538] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:41.877 [2024-07-24 09:50:19.585548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:41.877 [2024-07-24 09:50:19.585557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:41.877 [2024-07-24 09:50:19.585567] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:41.877 [2024-07-24 09:50:19.585576] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:41.877 [2024-07-24 09:50:19.585587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.877 [2024-07-24 09:50:19.585596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:41.877 [2024-07-24 09:50:19.585607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.053 ms 00:24:41.877 [2024-07-24 09:50:19.585617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.877 [2024-07-24 09:50:19.587336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.877 [2024-07-24 09:50:19.587365] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:41.877 [2024-07-24 09:50:19.587380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.700 ms 00:24:41.877 [2024-07-24 09:50:19.587390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.877 [2024-07-24 09:50:19.587493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.877 [2024-07-24 09:50:19.587513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:41.877 [2024-07-24 09:50:19.587528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:24:41.877 [2024-07-24 09:50:19.587537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.877 [2024-07-24 09:50:19.593711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.877 [2024-07-24 09:50:19.593829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:41.877 [2024-07-24 09:50:19.593897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.877 [2024-07-24 09:50:19.593931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.877 [2024-07-24 09:50:19.593999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.877 [2024-07-24 09:50:19.594031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:41.877 [2024-07-24 09:50:19.594061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.877 [2024-07-24 09:50:19.594090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.877 [2024-07-24 09:50:19.594178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.877 [2024-07-24 09:50:19.594306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:41.877 [2024-07-24 09:50:19.594361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.877 [2024-07-24 09:50:19.594390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.877 [2024-07-24 09:50:19.594429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.877 [2024-07-24 09:50:19.594469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:41.877 [2024-07-24 09:50:19.594498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.877 [2024-07-24 09:50:19.594527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.877 [2024-07-24 09:50:19.605935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.877 [2024-07-24 09:50:19.606127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:41.877 [2024-07-24 09:50:19.606277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.877 [2024-07-24 09:50:19.606326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.877 [2024-07-24 09:50:19.614553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.877 [2024-07-24 09:50:19.614710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:41.877 [2024-07-24 09:50:19.614729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.877 [2024-07-24 09:50:19.614740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.877 [2024-07-24 09:50:19.614790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:24:41.877 [2024-07-24 09:50:19.614801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:41.877 [2024-07-24 09:50:19.614819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.877 [2024-07-24 09:50:19.614828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.877 [2024-07-24 09:50:19.614853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.877 [2024-07-24 09:50:19.614864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:41.877 [2024-07-24 09:50:19.614875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.877 [2024-07-24 09:50:19.614884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.877 [2024-07-24 09:50:19.614966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.877 [2024-07-24 09:50:19.614978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:41.877 [2024-07-24 09:50:19.614989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.877 [2024-07-24 09:50:19.615003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.877 [2024-07-24 09:50:19.615035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.877 [2024-07-24 09:50:19.615054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:41.877 [2024-07-24 09:50:19.615065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.877 [2024-07-24 09:50:19.615075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.877 [2024-07-24 09:50:19.615110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.877 [2024-07-24 09:50:19.615122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:41.877 [2024-07-24 09:50:19.615132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.877 [2024-07-24 09:50:19.615145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.877 [2024-07-24 09:50:19.615186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.877 [2024-07-24 09:50:19.615400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:41.877 [2024-07-24 09:50:19.615433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.877 [2024-07-24 09:50:19.615462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.877 [2024-07-24 09:50:19.615624] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 62.938 ms, result 0 00:24:42.136 00:24:42.136 00:24:42.136 09:50:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:24:44.048 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:24:44.048 09:50:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:24:44.048 09:50:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:24:44.048 09:50:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:44.048 09:50:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:44.048 09:50:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 
-- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:44.048 09:50:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:44.307 09:50:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:24:44.307 Process with pid 92507 is not found 00:24:44.307 09:50:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 92507 00:24:44.307 09:50:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92507 ']' 00:24:44.307 09:50:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 92507 00:24:44.307 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (92507) - No such process 00:24:44.307 09:50:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 92507 is not found' 00:24:44.307 09:50:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:24:44.566 Remove shared memory files 00:24:44.566 09:50:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:24:44.566 09:50:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:44.566 09:50:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:24:44.566 09:50:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:24:44.566 09:50:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:24:44.566 09:50:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:44.566 09:50:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:24:44.566 ************************************ 00:24:44.566 END TEST ftl_dirty_shutdown 00:24:44.566 ************************************ 00:24:44.566 00:24:44.566 real 3m5.777s 00:24:44.566 user 3m30.288s 00:24:44.566 sys 0m34.263s 00:24:44.566 09:50:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:44.566 09:50:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:44.566 09:50:22 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:24:44.566 09:50:22 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:24:44.566 09:50:22 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:44.566 09:50:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:44.566 ************************************ 00:24:44.566 START TEST ftl_upgrade_shutdown 00:24:44.566 ************************************ 00:24:44.566 09:50:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:24:44.566 * Looking for test storage... 00:24:44.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:44.566 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:44.566 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:24:44.824 
09:50:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:24:44.824 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:24:44.825 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:24:44.825 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:24:44.825 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:44.825 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94527 00:24:44.825 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:24:44.825 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:24:44.825 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94527 00:24:44.825 09:50:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 94527 ']' 00:24:44.825 09:50:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.825 09:50:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:44.825 09:50:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.825 09:50:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:44.825 09:50:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:44.825 [2024-07-24 09:50:22.518326] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:24:44.825 [2024-07-24 09:50:22.518659] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94527 ] 00:24:45.083 [2024-07-24 09:50:22.679455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.083 [2024-07-24 09:50:22.720437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:45.651 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:24:45.910 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:24:45.910 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:45.910 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:24:45.910 09:50:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:24:45.910 09:50:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:45.910 09:50:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:24:45.910 09:50:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:24:45.910 09:50:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:24:46.169 09:50:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:46.169 { 00:24:46.169 "name": "basen1", 00:24:46.169 "aliases": [ 00:24:46.169 "0c7aa64b-b472-410c-a295-6fce8a96ca4b" 00:24:46.169 ], 00:24:46.169 "product_name": "NVMe disk", 00:24:46.169 "block_size": 4096, 00:24:46.169 "num_blocks": 1310720, 00:24:46.169 "uuid": "0c7aa64b-b472-410c-a295-6fce8a96ca4b", 00:24:46.169 "assigned_rate_limits": { 00:24:46.169 "rw_ios_per_sec": 0, 00:24:46.169 "rw_mbytes_per_sec": 0, 00:24:46.169 "r_mbytes_per_sec": 0, 00:24:46.169 "w_mbytes_per_sec": 0 00:24:46.169 }, 00:24:46.169 "claimed": true, 00:24:46.169 "claim_type": "read_many_write_one", 00:24:46.169 "zoned": false, 00:24:46.169 "supported_io_types": { 00:24:46.169 "read": true, 00:24:46.169 "write": true, 00:24:46.169 "unmap": true, 00:24:46.169 "flush": true, 00:24:46.169 "reset": true, 00:24:46.169 "nvme_admin": true, 00:24:46.169 "nvme_io": true, 00:24:46.169 "nvme_io_md": false, 00:24:46.169 "write_zeroes": true, 00:24:46.169 "zcopy": false, 00:24:46.169 "get_zone_info": false, 00:24:46.169 "zone_management": false, 00:24:46.169 "zone_append": false, 00:24:46.169 "compare": true, 00:24:46.169 "compare_and_write": false, 00:24:46.169 "abort": true, 00:24:46.169 "seek_hole": false, 00:24:46.169 "seek_data": false, 00:24:46.169 "copy": true, 00:24:46.169 "nvme_iov_md": false 00:24:46.169 }, 00:24:46.169 "driver_specific": { 00:24:46.169 "nvme": [ 00:24:46.169 { 00:24:46.169 "pci_address": "0000:00:11.0", 00:24:46.169 "trid": { 00:24:46.169 "trtype": "PCIe", 00:24:46.169 "traddr": "0000:00:11.0" 00:24:46.169 }, 00:24:46.169 "ctrlr_data": { 00:24:46.169 "cntlid": 0, 00:24:46.169 "vendor_id": "0x1b36", 00:24:46.169 "model_number": "QEMU NVMe Ctrl", 00:24:46.169 "serial_number": "12341", 00:24:46.169 "firmware_revision": "8.0.0", 00:24:46.169 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:46.169 "oacs": { 00:24:46.169 "security": 0, 00:24:46.169 "format": 1, 00:24:46.169 "firmware": 0, 00:24:46.169 "ns_manage": 1 00:24:46.169 }, 00:24:46.169 "multi_ctrlr": false, 00:24:46.169 "ana_reporting": false 00:24:46.169 }, 00:24:46.169 "vs": { 00:24:46.169 "nvme_version": "1.4" 00:24:46.169 }, 00:24:46.169 "ns_data": { 00:24:46.169 "id": 1, 00:24:46.169 "can_share": false 00:24:46.169 } 00:24:46.169 } 00:24:46.169 ], 00:24:46.169 "mp_policy": "active_passive" 00:24:46.169 } 00:24:46.169 } 00:24:46.169 ]' 00:24:46.169 09:50:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:46.169 09:50:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:24:46.169 09:50:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:46.169 09:50:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:24:46.169 09:50:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:24:46.169 09:50:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:24:46.169 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:46.169 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:24:46.169 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:46.169 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:46.169 09:50:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:46.428 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=7893ff48-bc3f-4946-b200-19833def9d80 00:24:46.428 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:46.428 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7893ff48-bc3f-4946-b200-19833def9d80 00:24:46.687 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:24:46.687 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=fb3ec3fc-dedf-42a8-b546-5c8add84dacc 00:24:46.687 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u fb3ec3fc-dedf-42a8-b546-5c8add84dacc 00:24:46.945 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=6e01e03c-2fd5-4ab2-bdc1-49c0677c9e3c 00:24:46.945 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 6e01e03c-2fd5-4ab2-bdc1-49c0677c9e3c ]] 00:24:46.945 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 6e01e03c-2fd5-4ab2-bdc1-49c0677c9e3c 5120 00:24:46.945 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:24:46.945 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:46.945 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=6e01e03c-2fd5-4ab2-bdc1-49c0677c9e3c 00:24:46.945 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:24:46.945 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 6e01e03c-2fd5-4ab2-bdc1-49c0677c9e3c 00:24:46.945 09:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=6e01e03c-2fd5-4ab2-bdc1-49c0677c9e3c 00:24:46.945 09:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:46.945 09:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:24:46.945 09:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:24:46.945 09:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e01e03c-2fd5-4ab2-bdc1-49c0677c9e3c 00:24:47.227 09:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:47.227 { 00:24:47.227 "name": "6e01e03c-2fd5-4ab2-bdc1-49c0677c9e3c", 00:24:47.227 "aliases": [ 00:24:47.227 "lvs/basen1p0" 00:24:47.227 ], 00:24:47.227 "product_name": "Logical Volume", 00:24:47.227 "block_size": 4096, 00:24:47.227 "num_blocks": 5242880, 00:24:47.227 "uuid": "6e01e03c-2fd5-4ab2-bdc1-49c0677c9e3c", 00:24:47.227 "assigned_rate_limits": { 00:24:47.227 "rw_ios_per_sec": 0, 00:24:47.227 "rw_mbytes_per_sec": 0, 00:24:47.227 "r_mbytes_per_sec": 0, 00:24:47.227 "w_mbytes_per_sec": 0 00:24:47.227 }, 00:24:47.227 "claimed": false, 00:24:47.227 "zoned": false, 00:24:47.227 "supported_io_types": { 00:24:47.227 "read": true, 00:24:47.227 "write": true, 00:24:47.227 "unmap": true, 00:24:47.227 "flush": false, 00:24:47.227 "reset": true, 00:24:47.227 "nvme_admin": false, 00:24:47.227 "nvme_io": false, 00:24:47.227 "nvme_io_md": false, 00:24:47.227 "write_zeroes": true, 00:24:47.227 
"zcopy": false, 00:24:47.227 "get_zone_info": false, 00:24:47.227 "zone_management": false, 00:24:47.227 "zone_append": false, 00:24:47.227 "compare": false, 00:24:47.227 "compare_and_write": false, 00:24:47.227 "abort": false, 00:24:47.227 "seek_hole": true, 00:24:47.227 "seek_data": true, 00:24:47.227 "copy": false, 00:24:47.227 "nvme_iov_md": false 00:24:47.227 }, 00:24:47.227 "driver_specific": { 00:24:47.227 "lvol": { 00:24:47.227 "lvol_store_uuid": "fb3ec3fc-dedf-42a8-b546-5c8add84dacc", 00:24:47.227 "base_bdev": "basen1", 00:24:47.227 "thin_provision": true, 00:24:47.227 "num_allocated_clusters": 0, 00:24:47.227 "snapshot": false, 00:24:47.227 "clone": false, 00:24:47.227 "esnap_clone": false 00:24:47.227 } 00:24:47.227 } 00:24:47.227 } 00:24:47.227 ]' 00:24:47.227 09:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:47.227 09:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:24:47.227 09:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:47.504 09:50:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:24:47.504 09:50:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:24:47.504 09:50:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:24:47.504 09:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:24:47.504 09:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:47.504 09:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:24:47.504 09:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:24:47.504 09:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:24:47.504 09:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:24:47.761 09:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:24:47.761 09:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:24:47.761 09:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 6e01e03c-2fd5-4ab2-bdc1-49c0677c9e3c -c cachen1p0 --l2p_dram_limit 2 00:24:48.020 [2024-07-24 09:50:25.641176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.020 [2024-07-24 09:50:25.641255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:24:48.020 [2024-07-24 09:50:25.641275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:48.020 [2024-07-24 09:50:25.641307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.020 [2024-07-24 09:50:25.641371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.020 [2024-07-24 09:50:25.641399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:48.020 [2024-07-24 09:50:25.641412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:24:48.020 [2024-07-24 09:50:25.641423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.020 [2024-07-24 09:50:25.641452] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:24:48.020 [2024-07-24 09:50:25.641773] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:24:48.020 [2024-07-24 09:50:25.641803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.020 [2024-07-24 09:50:25.641813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:48.020 [2024-07-24 09:50:25.641827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.361 ms 00:24:48.020 [2024-07-24 09:50:25.641836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.020 [2024-07-24 09:50:25.641914] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID fd2e273c-18e7-4f40-ac1a-ad70a9b87133 00:24:48.020 [2024-07-24 09:50:25.643343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.020 [2024-07-24 09:50:25.643373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:24:48.020 [2024-07-24 09:50:25.643397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:24:48.020 [2024-07-24 09:50:25.643410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.020 [2024-07-24 09:50:25.650897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.020 [2024-07-24 09:50:25.650935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:48.020 [2024-07-24 09:50:25.650948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.448 ms 00:24:48.020 [2024-07-24 09:50:25.650963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.020 [2024-07-24 09:50:25.651009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.020 [2024-07-24 09:50:25.651027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:48.020 [2024-07-24 09:50:25.651037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:24:48.020 [2024-07-24 09:50:25.651057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.020 [2024-07-24 09:50:25.651121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.020 [2024-07-24 09:50:25.651137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:24:48.020 [2024-07-24 09:50:25.651147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:24:48.020 [2024-07-24 09:50:25.651159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.020 [2024-07-24 09:50:25.651184] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:24:48.020 [2024-07-24 09:50:25.653023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.020 [2024-07-24 09:50:25.653053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:48.020 [2024-07-24 09:50:25.653067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.844 ms 00:24:48.020 [2024-07-24 09:50:25.653084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.020 [2024-07-24 09:50:25.653126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.020 [2024-07-24 09:50:25.653136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:24:48.020 [2024-07-24 09:50:25.653149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:48.020 [2024-07-24 09:50:25.653159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:24:48.020 [2024-07-24 09:50:25.653183] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:24:48.020 [2024-07-24 09:50:25.653327] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:24:48.020 [2024-07-24 09:50:25.653346] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:24:48.020 [2024-07-24 09:50:25.653359] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:24:48.020 [2024-07-24 09:50:25.653374] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:24:48.020 [2024-07-24 09:50:25.653386] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:24:48.020 [2024-07-24 09:50:25.653402] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:24:48.020 [2024-07-24 09:50:25.653411] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:24:48.020 [2024-07-24 09:50:25.653424] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:24:48.020 [2024-07-24 09:50:25.653434] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:24:48.020 [2024-07-24 09:50:25.653446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.020 [2024-07-24 09:50:25.653456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:24:48.020 [2024-07-24 09:50:25.653470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.268 ms 00:24:48.020 [2024-07-24 09:50:25.653480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.020 [2024-07-24 09:50:25.653561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.020 [2024-07-24 09:50:25.653571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:24:48.020 [2024-07-24 09:50:25.653589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:24:48.020 [2024-07-24 09:50:25.653599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.020 [2024-07-24 09:50:25.653699] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:24:48.020 [2024-07-24 09:50:25.653719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:24:48.020 [2024-07-24 09:50:25.653731] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:48.020 [2024-07-24 09:50:25.653747] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:48.020 [2024-07-24 09:50:25.653759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:24:48.020 [2024-07-24 09:50:25.653768] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:24:48.020 [2024-07-24 09:50:25.653781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:24:48.020 [2024-07-24 09:50:25.653790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:24:48.020 [2024-07-24 09:50:25.653801] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:24:48.020 [2024-07-24 09:50:25.653810] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:48.020 [2024-07-24 09:50:25.653822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:24:48.020 [2024-07-24 09:50:25.653831] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 
14.75 MiB 00:24:48.021 [2024-07-24 09:50:25.653842] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:48.021 [2024-07-24 09:50:25.653851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:24:48.021 [2024-07-24 09:50:25.653865] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:24:48.021 [2024-07-24 09:50:25.653874] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:48.021 [2024-07-24 09:50:25.653887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:24:48.021 [2024-07-24 09:50:25.653896] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:24:48.021 [2024-07-24 09:50:25.653907] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:48.021 [2024-07-24 09:50:25.653916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:24:48.021 [2024-07-24 09:50:25.653927] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:24:48.021 [2024-07-24 09:50:25.653936] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:48.021 [2024-07-24 09:50:25.653947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:24:48.021 [2024-07-24 09:50:25.653956] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:24:48.021 [2024-07-24 09:50:25.653967] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:48.021 [2024-07-24 09:50:25.653977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:24:48.021 [2024-07-24 09:50:25.653988] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:24:48.021 [2024-07-24 09:50:25.653997] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:48.021 [2024-07-24 09:50:25.654008] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:24:48.021 [2024-07-24 09:50:25.654017] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:24:48.021 [2024-07-24 09:50:25.654032] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:48.021 [2024-07-24 09:50:25.654042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:24:48.021 [2024-07-24 09:50:25.654053] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:24:48.021 [2024-07-24 09:50:25.654062] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:48.021 [2024-07-24 09:50:25.654074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:24:48.021 [2024-07-24 09:50:25.654083] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:24:48.021 [2024-07-24 09:50:25.654094] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:48.021 [2024-07-24 09:50:25.654103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:24:48.021 [2024-07-24 09:50:25.654115] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:24:48.021 [2024-07-24 09:50:25.654123] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:48.021 [2024-07-24 09:50:25.654134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:24:48.021 [2024-07-24 09:50:25.654143] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:24:48.021 [2024-07-24 09:50:25.654154] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:48.021 [2024-07-24 09:50:25.654163] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:24:48.021 [2024-07-24 09:50:25.654175] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:24:48.021 [2024-07-24 09:50:25.654184] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:48.021 [2024-07-24 09:50:25.654211] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:48.021 [2024-07-24 09:50:25.654224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:24:48.021 [2024-07-24 09:50:25.654236] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:24:48.021 [2024-07-24 09:50:25.654245] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:24:48.021 [2024-07-24 09:50:25.654256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:24:48.021 [2024-07-24 09:50:25.654265] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:24:48.021 [2024-07-24 09:50:25.654277] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:24:48.021 [2024-07-24 09:50:25.654292] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:24:48.021 [2024-07-24 09:50:25.654307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:48.021 [2024-07-24 09:50:25.654319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:24:48.021 [2024-07-24 09:50:25.654331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:24:48.021 [2024-07-24 09:50:25.654342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:24:48.021 [2024-07-24 09:50:25.654354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:24:48.021 [2024-07-24 09:50:25.654365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:24:48.021 [2024-07-24 09:50:25.654378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:24:48.021 [2024-07-24 09:50:25.654388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:24:48.021 [2024-07-24 09:50:25.654403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:24:48.021 [2024-07-24 09:50:25.654413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:24:48.021 [2024-07-24 09:50:25.654426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:24:48.021 [2024-07-24 09:50:25.654436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:24:48.021 [2024-07-24 09:50:25.654448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:24:48.021 [2024-07-24 09:50:25.654458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 
blk_offs:0x2f80 blk_sz:0x20 00:24:48.021 [2024-07-24 09:50:25.654471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:24:48.021 [2024-07-24 09:50:25.654482] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:24:48.021 [2024-07-24 09:50:25.654495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:48.021 [2024-07-24 09:50:25.654509] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:48.021 [2024-07-24 09:50:25.654522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:24:48.021 [2024-07-24 09:50:25.654532] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:24:48.021 [2024-07-24 09:50:25.654544] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:24:48.021 [2024-07-24 09:50:25.654555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.021 [2024-07-24 09:50:25.654568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:24:48.021 [2024-07-24 09:50:25.654586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.912 ms 00:24:48.021 [2024-07-24 09:50:25.654601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.021 [2024-07-24 09:50:25.654643] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:24:48.021 [2024-07-24 09:50:25.654665] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:24:51.343 [2024-07-24 09:50:28.937218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:51.343 [2024-07-24 09:50:28.937456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:24:51.343 [2024-07-24 09:50:28.937595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3287.901 ms 00:24:51.343 [2024-07-24 09:50:28.937638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:51.343 [2024-07-24 09:50:28.948543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:51.343 [2024-07-24 09:50:28.948757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:51.343 [2024-07-24 09:50:28.948878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.799 ms 00:24:51.343 [2024-07-24 09:50:28.948936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:51.343 [2024-07-24 09:50:28.949029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:51.343 [2024-07-24 09:50:28.949070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:24:51.343 [2024-07-24 09:50:28.949102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:24:51.343 [2024-07-24 09:50:28.949226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:51.343 [2024-07-24 09:50:28.959860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:51.343 [2024-07-24 09:50:28.960029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:51.343 [2024-07-24 09:50:28.960124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.559 ms 00:24:51.343 [2024-07-24 09:50:28.960164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:51.343 [2024-07-24 09:50:28.960238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:51.343 [2024-07-24 09:50:28.960279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:51.343 [2024-07-24 09:50:28.960321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:51.343 [2024-07-24 09:50:28.960361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:51.343 [2024-07-24 09:50:28.960933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:51.343 [2024-07-24 09:50:28.961056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:24:51.343 [2024-07-24 09:50:28.961131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.432 ms 00:24:51.343 [2024-07-24 09:50:28.961173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:51.343 [2024-07-24 09:50:28.961269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:51.343 [2024-07-24 09:50:28.961420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:51.343 [2024-07-24 09:50:28.961434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:24:51.343 [2024-07-24 09:50:28.961446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:51.343 [2024-07-24 09:50:28.968731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:51.343 [2024-07-24 09:50:28.968908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:51.343 [2024-07-24 09:50:28.968989] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.267 ms 00:24:51.343 [2024-07-24 09:50:28.969029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:51.343 [2024-07-24 09:50:28.976866] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:24:51.343 [2024-07-24 09:50:28.978018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:51.343 [2024-07-24 09:50:28.978135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:24:51.343 [2024-07-24 09:50:28.978227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.912 ms 00:24:51.343 [2024-07-24 09:50:28.978264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:51.343 [2024-07-24 09:50:29.003822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:51.343 [2024-07-24 09:50:29.004000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:24:51.343 [2024-07-24 09:50:29.004092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.542 ms 00:24:51.343 [2024-07-24 09:50:29.004134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:51.343 [2024-07-24 09:50:29.004357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:51.343 [2024-07-24 09:50:29.004474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:24:51.343 [2024-07-24 09:50:29.004523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:24:51.343 [2024-07-24 09:50:29.004557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:51.343 [2024-07-24 09:50:29.007733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:51.343 [2024-07-24 09:50:29.007861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:24:51.343 [2024-07-24 09:50:29.007958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.066 ms 00:24:51.343 [2024-07-24 09:50:29.007995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:51.343 [2024-07-24 09:50:29.010949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:51.343 [2024-07-24 09:50:29.011078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:24:51.343 [2024-07-24 09:50:29.011154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.896 ms 00:24:51.343 [2024-07-24 09:50:29.011204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:51.343 [2024-07-24 09:50:29.011507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:51.343 [2024-07-24 09:50:29.011559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:24:51.343 [2024-07-24 09:50:29.011646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.241 ms 00:24:51.343 [2024-07-24 09:50:29.011681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:51.343 [2024-07-24 09:50:29.051637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:51.343 [2024-07-24 09:50:29.051885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:24:51.343 [2024-07-24 09:50:29.051971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.962 ms 00:24:51.343 [2024-07-24 09:50:29.052007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:51.344 [2024-07-24 09:50:29.056488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:24:51.344 [2024-07-24 09:50:29.056640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:24:51.344 [2024-07-24 09:50:29.056668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.419 ms 00:24:51.344 [2024-07-24 09:50:29.056680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:51.344 [2024-07-24 09:50:29.059959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:51.344 [2024-07-24 09:50:29.059997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:24:51.344 [2024-07-24 09:50:29.060013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.240 ms 00:24:51.344 [2024-07-24 09:50:29.060023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:51.344 [2024-07-24 09:50:29.063745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:51.344 [2024-07-24 09:50:29.063783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:24:51.344 [2024-07-24 09:50:29.063798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.685 ms 00:24:51.344 [2024-07-24 09:50:29.063809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:51.344 [2024-07-24 09:50:29.063858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:51.344 [2024-07-24 09:50:29.063870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:24:51.344 [2024-07-24 09:50:29.063884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:24:51.344 [2024-07-24 09:50:29.063894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:51.344 [2024-07-24 09:50:29.063960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:51.344 [2024-07-24 09:50:29.063972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:24:51.344 [2024-07-24 09:50:29.063988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:24:51.344 [2024-07-24 09:50:29.064006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:51.344 [2024-07-24 09:50:29.065076] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3429.046 ms, result 0 00:24:51.344 { 00:24:51.344 "name": "ftl", 00:24:51.344 "uuid": "fd2e273c-18e7-4f40-ac1a-ad70a9b87133" 00:24:51.344 } 00:24:51.344 09:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:24:51.603 [2024-07-24 09:50:29.265318] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.603 09:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:24:51.861 09:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:24:51.861 [2024-07-24 09:50:29.645207] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:24:51.861 09:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:24:52.120 [2024-07-24 09:50:29.837411] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:52.120 09:50:29 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:24:52.379 Fill FTL, iteration 1 00:24:52.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=94645 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 94645 /var/tmp/spdk.tgt.sock 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 94645 ']' 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:52.379 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:24:52.638 [2024-07-24 09:50:30.310026] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:24:52.638 [2024-07-24 09:50:30.310473] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94645 ] 00:24:52.897 [2024-07-24 09:50:30.493075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.897 [2024-07-24 09:50:30.546891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.465 09:50:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:53.465 09:50:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:24:53.465 09:50:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:24:53.724 ftln1 00:24:53.724 09:50:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:24:53.724 09:50:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:24:53.724 09:50:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:24:53.724 09:50:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 94645 00:24:53.724 09:50:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 94645 ']' 00:24:53.724 09:50:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 94645 00:24:53.724 09:50:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:24:53.724 09:50:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:53.724 09:50:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94645 00:24:53.982 killing process with pid 94645 00:24:53.982 09:50:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:53.982 09:50:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:53.982 09:50:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94645' 00:24:53.982 09:50:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 94645 00:24:53.982 09:50:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 94645 00:24:54.239 09:50:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:24:54.239 09:50:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:24:54.239 [2024-07-24 09:50:32.049653] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
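For orientation, the tcp_dd helper being traced here (ftl/common.sh@198-199) reduces to the steps below, condensed from the commands visible in this log. This is only a sketch of the flow with paths shortened, not the verbatim helper:

    # temporary initiator target on core 1 with its own RPC socket
    spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
    # attach the FTL bdev exported over NVMe/TCP by the main target; it surfaces locally as "ftln1"
    rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
    # persist the bdev subsystem config so spdk_dd can be driven from a JSON file
    { echo '{"subsystems": ['; rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev; echo ']}'; } \
        > test/ftl/config/ini.json
    # the temporary initiator is then stopped, and the transfer itself runs as spdk_dd, e.g. the first fill:
    spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0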
00:24:54.239 [2024-07-24 09:50:32.049779] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94674 ] 00:24:54.510 [2024-07-24 09:50:32.219289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.510 [2024-07-24 09:50:32.264415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.000  Copying: 253/1024 [MB] (253 MBps) Copying: 507/1024 [MB] (254 MBps) Copying: 762/1024 [MB] (255 MBps) Copying: 1014/1024 [MB] (252 MBps) Copying: 1024/1024 [MB] (average 253 MBps) 00:24:59.000 00:24:59.000 Calculate MD5 checksum, iteration 1 00:24:59.000 09:50:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:24:59.000 09:50:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:24:59.000 09:50:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:59.000 09:50:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:59.000 09:50:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:59.000 09:50:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:59.000 09:50:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:59.000 09:50:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:59.259 [2024-07-24 09:50:36.852752] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:24:59.259 [2024-07-24 09:50:36.853020] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94727 ] 00:24:59.259 [2024-07-24 09:50:37.019854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.259 [2024-07-24 09:50:37.065875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.456  Copying: 677/1024 [MB] (677 MBps) Copying: 1024/1024 [MB] (average 647 MBps) 00:25:01.456 00:25:01.456 09:50:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:25:01.456 09:50:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:03.427 09:50:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:03.427 Fill FTL, iteration 2 00:25:03.427 09:50:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b6d4af546d4dbcbc10a9d1c9843bc01f 00:25:03.427 09:50:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:03.427 09:50:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:03.427 09:50:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:25:03.427 09:50:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:03.427 09:50:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:03.427 09:50:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:03.427 09:50:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:03.427 09:50:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:03.427 09:50:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:03.427 [2024-07-24 09:50:40.928960] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:25:03.427 [2024-07-24 09:50:40.929357] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94773 ] 00:25:03.427 [2024-07-24 09:50:41.102925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.427 [2024-07-24 09:50:41.148064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.882  Copying: 254/1024 [MB] (254 MBps) Copying: 509/1024 [MB] (255 MBps) Copying: 760/1024 [MB] (251 MBps) Copying: 1015/1024 [MB] (255 MBps) Copying: 1024/1024 [MB] (average 253 MBps) 00:25:07.882 00:25:07.882 09:50:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:25:07.882 Calculate MD5 checksum, iteration 2 00:25:07.882 09:50:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:25:07.882 09:50:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:07.882 09:50:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:07.882 09:50:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:07.882 09:50:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:07.882 09:50:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:07.882 09:50:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:08.140 [2024-07-24 09:50:45.715799] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
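Taken together, the 'Fill FTL' and 'Calculate MD5 checksum' passes interleaved with the spdk_dd start-ups above follow the pattern set by the variables traced at upgrade_shutdown.sh@28-38. Roughly, with paths shortened (a reconstruction from the traces, not the script verbatim):

    seek=0; skip=0; sums=()
    for ((i = 0; i < iterations; i++)); do     # iterations=2, bs=1 MiB, count=1024, qd=2
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
        seek=$((seek + 1024))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sums[i]=$(md5sum test/ftl/file | cut -f1 '-d ')   # the b6d4af... / 8ef971... values in this run
    done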
00:25:08.140 [2024-07-24 09:50:45.715929] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94826 ] 00:25:08.140 [2024-07-24 09:50:45.882723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.140 [2024-07-24 09:50:45.929602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.031  Copying: 689/1024 [MB] (689 MBps) Copying: 1024/1024 [MB] (average 675 MBps) 00:25:11.031 00:25:11.031 09:50:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:25:11.031 09:50:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:12.953 09:50:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:12.953 09:50:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=8ef9712543e795e02d025f9b7782b2ed 00:25:12.953 09:50:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:12.953 09:50:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:12.953 09:50:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:12.953 [2024-07-24 09:50:50.489456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:12.953 [2024-07-24 09:50:50.489513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:12.953 [2024-07-24 09:50:50.489537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:25:12.953 [2024-07-24 09:50:50.489549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:12.953 [2024-07-24 09:50:50.489583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:12.953 [2024-07-24 09:50:50.489596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:12.953 [2024-07-24 09:50:50.489607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:12.953 [2024-07-24 09:50:50.489618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:12.953 [2024-07-24 09:50:50.489639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:12.953 [2024-07-24 09:50:50.489651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:12.953 [2024-07-24 09:50:50.489674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:12.953 [2024-07-24 09:50:50.489689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:12.953 [2024-07-24 09:50:50.489756] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.323 ms, result 0 00:25:12.953 true 00:25:12.953 09:50:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:12.953 { 00:25:12.953 "name": "ftl", 00:25:12.953 "properties": [ 00:25:12.953 { 00:25:12.953 "name": "superblock_version", 00:25:12.953 "value": 5, 00:25:12.953 "read-only": true 00:25:12.953 }, 00:25:12.953 { 00:25:12.953 "name": "base_device", 00:25:12.953 "bands": [ 00:25:12.953 { 00:25:12.953 "id": 0, 00:25:12.953 "state": "FREE", 00:25:12.953 "validity": 0.0 00:25:12.953 }, 00:25:12.953 { 00:25:12.953 "id": 1, 
00:25:12.953 "state": "FREE", 00:25:12.954 "validity": 0.0 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "id": 2, 00:25:12.954 "state": "FREE", 00:25:12.954 "validity": 0.0 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "id": 3, 00:25:12.954 "state": "FREE", 00:25:12.954 "validity": 0.0 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "id": 4, 00:25:12.954 "state": "FREE", 00:25:12.954 "validity": 0.0 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "id": 5, 00:25:12.954 "state": "FREE", 00:25:12.954 "validity": 0.0 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "id": 6, 00:25:12.954 "state": "FREE", 00:25:12.954 "validity": 0.0 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "id": 7, 00:25:12.954 "state": "FREE", 00:25:12.954 "validity": 0.0 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "id": 8, 00:25:12.954 "state": "FREE", 00:25:12.954 "validity": 0.0 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "id": 9, 00:25:12.954 "state": "FREE", 00:25:12.954 "validity": 0.0 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "id": 10, 00:25:12.954 "state": "FREE", 00:25:12.954 "validity": 0.0 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "id": 11, 00:25:12.954 "state": "FREE", 00:25:12.954 "validity": 0.0 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "id": 12, 00:25:12.954 "state": "FREE", 00:25:12.954 "validity": 0.0 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "id": 13, 00:25:12.954 "state": "FREE", 00:25:12.954 "validity": 0.0 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "id": 14, 00:25:12.954 "state": "FREE", 00:25:12.954 "validity": 0.0 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "id": 15, 00:25:12.954 "state": "FREE", 00:25:12.954 "validity": 0.0 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "id": 16, 00:25:12.954 "state": "FREE", 00:25:12.954 "validity": 0.0 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "id": 17, 00:25:12.954 "state": "FREE", 00:25:12.954 "validity": 0.0 00:25:12.954 } 00:25:12.954 ], 00:25:12.954 "read-only": true 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "name": "cache_device", 00:25:12.954 "type": "bdev", 00:25:12.954 "chunks": [ 00:25:12.954 { 00:25:12.954 "id": 0, 00:25:12.954 "state": "INACTIVE", 00:25:12.954 "utilization": 0.0 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "id": 1, 00:25:12.954 "state": "CLOSED", 00:25:12.954 "utilization": 1.0 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "id": 2, 00:25:12.954 "state": "CLOSED", 00:25:12.954 "utilization": 1.0 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "id": 3, 00:25:12.954 "state": "OPEN", 00:25:12.954 "utilization": 0.001953125 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "id": 4, 00:25:12.954 "state": "OPEN", 00:25:12.954 "utilization": 0.0 00:25:12.954 } 00:25:12.954 ], 00:25:12.954 "read-only": true 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "name": "verbose_mode", 00:25:12.954 "value": true, 00:25:12.954 "unit": "", 00:25:12.954 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:12.954 }, 00:25:12.954 { 00:25:12.954 "name": "prep_upgrade_on_shutdown", 00:25:12.954 "value": false, 00:25:12.954 "unit": "", 00:25:12.954 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:12.954 } 00:25:12.954 ] 00:25:12.954 } 00:25:12.954 09:50:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:25:13.213 [2024-07-24 09:50:50.861444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.213 [2024-07-24 09:50:50.861514] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:13.213 [2024-07-24 09:50:50.861541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:13.213 [2024-07-24 09:50:50.861559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.213 [2024-07-24 09:50:50.861595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.213 [2024-07-24 09:50:50.861606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:13.213 [2024-07-24 09:50:50.861617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:13.213 [2024-07-24 09:50:50.861632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.213 [2024-07-24 09:50:50.861668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.213 [2024-07-24 09:50:50.861687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:13.213 [2024-07-24 09:50:50.861704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:13.213 [2024-07-24 09:50:50.861722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.213 [2024-07-24 09:50:50.861796] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.341 ms, result 0 00:25:13.213 true 00:25:13.213 09:50:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:25:13.213 09:50:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:25:13.213 09:50:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:13.472 09:50:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:25:13.472 09:50:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:25:13.472 09:50:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:13.472 [2024-07-24 09:50:51.273349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.472 [2024-07-24 09:50:51.273405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:13.472 [2024-07-24 09:50:51.273421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:13.472 [2024-07-24 09:50:51.273431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.472 [2024-07-24 09:50:51.273460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.472 [2024-07-24 09:50:51.273471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:13.472 [2024-07-24 09:50:51.273481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:13.472 [2024-07-24 09:50:51.273490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.472 [2024-07-24 09:50:51.273510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.472 [2024-07-24 09:50:51.273521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:13.472 [2024-07-24 09:50:51.273531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:13.472 [2024-07-24 09:50:51.273541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.472 [2024-07-24 09:50:51.273600] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.245 ms, result 0 00:25:13.472 true 00:25:13.732 09:50:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:13.732 { 00:25:13.732 "name": "ftl", 00:25:13.732 "properties": [ 00:25:13.732 { 00:25:13.732 "name": "superblock_version", 00:25:13.732 "value": 5, 00:25:13.732 "read-only": true 00:25:13.732 }, 00:25:13.732 { 00:25:13.732 "name": "base_device", 00:25:13.732 "bands": [ 00:25:13.732 { 00:25:13.732 "id": 0, 00:25:13.732 "state": "FREE", 00:25:13.732 "validity": 0.0 00:25:13.732 }, 00:25:13.732 { 00:25:13.732 "id": 1, 00:25:13.732 "state": "FREE", 00:25:13.732 "validity": 0.0 00:25:13.732 }, 00:25:13.732 { 00:25:13.732 "id": 2, 00:25:13.732 "state": "FREE", 00:25:13.732 "validity": 0.0 00:25:13.732 }, 00:25:13.732 { 00:25:13.732 "id": 3, 00:25:13.732 "state": "FREE", 00:25:13.732 "validity": 0.0 00:25:13.732 }, 00:25:13.732 { 00:25:13.732 "id": 4, 00:25:13.732 "state": "FREE", 00:25:13.732 "validity": 0.0 00:25:13.732 }, 00:25:13.732 { 00:25:13.732 "id": 5, 00:25:13.732 "state": "FREE", 00:25:13.732 "validity": 0.0 00:25:13.732 }, 00:25:13.732 { 00:25:13.732 "id": 6, 00:25:13.732 "state": "FREE", 00:25:13.732 "validity": 0.0 00:25:13.732 }, 00:25:13.732 { 00:25:13.732 "id": 7, 00:25:13.732 "state": "FREE", 00:25:13.732 "validity": 0.0 00:25:13.732 }, 00:25:13.732 { 00:25:13.732 "id": 8, 00:25:13.732 "state": "FREE", 00:25:13.732 "validity": 0.0 00:25:13.732 }, 00:25:13.732 { 00:25:13.732 "id": 9, 00:25:13.732 "state": "FREE", 00:25:13.732 "validity": 0.0 00:25:13.732 }, 00:25:13.732 { 00:25:13.732 "id": 10, 00:25:13.732 "state": "FREE", 00:25:13.732 "validity": 0.0 00:25:13.732 }, 00:25:13.732 { 00:25:13.732 "id": 11, 00:25:13.732 "state": "FREE", 00:25:13.732 "validity": 0.0 00:25:13.732 }, 00:25:13.732 { 00:25:13.732 "id": 12, 00:25:13.732 "state": "FREE", 00:25:13.732 "validity": 0.0 00:25:13.732 }, 00:25:13.732 { 00:25:13.732 "id": 13, 00:25:13.732 "state": "FREE", 00:25:13.732 "validity": 0.0 00:25:13.732 }, 00:25:13.732 { 00:25:13.732 "id": 14, 00:25:13.732 "state": "FREE", 00:25:13.732 "validity": 0.0 00:25:13.732 }, 00:25:13.732 { 00:25:13.732 "id": 15, 00:25:13.732 "state": "FREE", 00:25:13.732 "validity": 0.0 00:25:13.732 }, 00:25:13.732 { 00:25:13.732 "id": 16, 00:25:13.732 "state": "FREE", 00:25:13.732 "validity": 0.0 00:25:13.732 }, 00:25:13.732 { 00:25:13.732 "id": 17, 00:25:13.732 "state": "FREE", 00:25:13.732 "validity": 0.0 00:25:13.732 } 00:25:13.732 ], 00:25:13.732 "read-only": true 00:25:13.732 }, 00:25:13.732 { 00:25:13.732 "name": "cache_device", 00:25:13.732 "type": "bdev", 00:25:13.732 "chunks": [ 00:25:13.732 { 00:25:13.732 "id": 0, 00:25:13.732 "state": "INACTIVE", 00:25:13.732 "utilization": 0.0 00:25:13.732 }, 00:25:13.732 { 00:25:13.732 "id": 1, 00:25:13.733 "state": "CLOSED", 00:25:13.733 "utilization": 1.0 00:25:13.733 }, 00:25:13.733 { 00:25:13.733 "id": 2, 00:25:13.733 "state": "CLOSED", 00:25:13.733 "utilization": 1.0 00:25:13.733 }, 00:25:13.733 { 00:25:13.733 "id": 3, 00:25:13.733 "state": "OPEN", 00:25:13.733 "utilization": 0.001953125 00:25:13.733 }, 00:25:13.733 { 00:25:13.733 "id": 4, 00:25:13.733 "state": "OPEN", 00:25:13.733 "utilization": 0.0 00:25:13.733 } 00:25:13.733 ], 00:25:13.733 "read-only": true 00:25:13.733 }, 00:25:13.733 { 00:25:13.733 "name": "verbose_mode", 00:25:13.733 "value": true, 00:25:13.733 "unit": "", 00:25:13.733 
"desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:13.733 }, 00:25:13.733 { 00:25:13.733 "name": "prep_upgrade_on_shutdown", 00:25:13.733 "value": true, 00:25:13.733 "unit": "", 00:25:13.733 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:13.733 } 00:25:13.733 ] 00:25:13.733 } 00:25:13.733 09:50:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:25:13.733 09:50:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94527 ]] 00:25:13.733 09:50:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94527 00:25:13.733 09:50:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 94527 ']' 00:25:13.733 09:50:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 94527 00:25:13.733 09:50:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:25:13.733 09:50:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:13.733 09:50:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94527 00:25:13.733 killing process with pid 94527 00:25:13.733 09:50:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:13.733 09:50:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:13.733 09:50:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94527' 00:25:13.733 09:50:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 94527 00:25:13.733 09:50:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 94527 00:25:13.992 [2024-07-24 09:50:51.651183] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:13.992 [2024-07-24 09:50:51.654693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.992 [2024-07-24 09:50:51.654842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:13.992 [2024-07-24 09:50:51.654923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:13.992 [2024-07-24 09:50:51.654959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.992 [2024-07-24 09:50:51.655032] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:13.992 [2024-07-24 09:50:51.655750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.992 [2024-07-24 09:50:51.655850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:13.992 [2024-07-24 09:50:51.655930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.665 ms 00:25:13.992 [2024-07-24 09:50:51.655963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.114 [2024-07-24 09:50:58.741912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.114 [2024-07-24 09:50:58.742115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:25:22.114 [2024-07-24 09:50:58.742225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7097.396 ms 00:25:22.114 [2024-07-24 09:50:58.742265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.114 [2024-07-24 09:50:58.743325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.114 [2024-07-24 09:50:58.743458] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:25:22.114 [2024-07-24 09:50:58.743545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.013 ms 00:25:22.114 [2024-07-24 09:50:58.743592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.114 [2024-07-24 09:50:58.744553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.114 [2024-07-24 09:50:58.744676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:25:22.114 [2024-07-24 09:50:58.744696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.907 ms 00:25:22.114 [2024-07-24 09:50:58.744707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.114 [2024-07-24 09:50:58.746375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.114 [2024-07-24 09:50:58.746405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:25:22.114 [2024-07-24 09:50:58.746417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.630 ms 00:25:22.114 [2024-07-24 09:50:58.746428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.114 [2024-07-24 09:50:58.748774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.114 [2024-07-24 09:50:58.748812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:25:22.114 [2024-07-24 09:50:58.748832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.320 ms 00:25:22.114 [2024-07-24 09:50:58.748843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.114 [2024-07-24 09:50:58.748915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.114 [2024-07-24 09:50:58.748927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:25:22.114 [2024-07-24 09:50:58.748937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:25:22.114 [2024-07-24 09:50:58.748947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.114 [2024-07-24 09:50:58.750132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.114 [2024-07-24 09:50:58.750165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:25:22.114 [2024-07-24 09:50:58.750177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.161 ms 00:25:22.114 [2024-07-24 09:50:58.750197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.114 [2024-07-24 09:50:58.751365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.114 [2024-07-24 09:50:58.751394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:25:22.114 [2024-07-24 09:50:58.751405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.141 ms 00:25:22.114 [2024-07-24 09:50:58.751415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.114 [2024-07-24 09:50:58.752548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.114 [2024-07-24 09:50:58.752579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:25:22.114 [2024-07-24 09:50:58.752590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.106 ms 00:25:22.115 [2024-07-24 09:50:58.752615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.115 [2024-07-24 09:50:58.753539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.115 
[2024-07-24 09:50:58.753572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:25:22.115 [2024-07-24 09:50:58.753584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.871 ms 00:25:22.115 [2024-07-24 09:50:58.753593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.115 [2024-07-24 09:50:58.753623] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:25:22.115 [2024-07-24 09:50:58.753638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:22.115 [2024-07-24 09:50:58.753651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:25:22.115 [2024-07-24 09:50:58.753662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:25:22.115 [2024-07-24 09:50:58.753672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:22.115 [2024-07-24 09:50:58.753683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:22.115 [2024-07-24 09:50:58.753694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:22.115 [2024-07-24 09:50:58.753705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:22.115 [2024-07-24 09:50:58.753715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:22.115 [2024-07-24 09:50:58.753725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:22.115 [2024-07-24 09:50:58.753736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:22.115 [2024-07-24 09:50:58.753747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:22.115 [2024-07-24 09:50:58.753757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:22.115 [2024-07-24 09:50:58.753767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:22.115 [2024-07-24 09:50:58.753778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:22.115 [2024-07-24 09:50:58.753788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:22.115 [2024-07-24 09:50:58.753798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:22.115 [2024-07-24 09:50:58.753808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:22.115 [2024-07-24 09:50:58.753818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:22.115 [2024-07-24 09:50:58.753831] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:25:22.115 [2024-07-24 09:50:58.753847] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: fd2e273c-18e7-4f40-ac1a-ad70a9b87133 00:25:22.115 [2024-07-24 09:50:58.753857] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:25:22.115 [2024-07-24 09:50:58.753867] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:25:22.115 [2024-07-24 09:50:58.753876] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:25:22.115 [2024-07-24 09:50:58.753886] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:25:22.115 [2024-07-24 09:50:58.753895] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:25:22.115 [2024-07-24 09:50:58.753905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:25:22.115 [2024-07-24 09:50:58.753915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:25:22.115 [2024-07-24 09:50:58.753923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:25:22.115 [2024-07-24 09:50:58.753932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:25:22.115 [2024-07-24 09:50:58.753942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.115 [2024-07-24 09:50:58.753953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:25:22.115 [2024-07-24 09:50:58.753963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.321 ms 00:25:22.115 [2024-07-24 09:50:58.753972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.115 [2024-07-24 09:50:58.755732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.115 [2024-07-24 09:50:58.755759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:25:22.115 [2024-07-24 09:50:58.755770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.743 ms 00:25:22.115 [2024-07-24 09:50:58.755780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.115 [2024-07-24 09:50:58.755882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.115 [2024-07-24 09:50:58.755893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:25:22.115 [2024-07-24 09:50:58.755903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms 00:25:22.115 [2024-07-24 09:50:58.755917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.115 [2024-07-24 09:50:58.762815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:22.115 [2024-07-24 09:50:58.762847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:22.115 [2024-07-24 09:50:58.762859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:22.115 [2024-07-24 09:50:58.762870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.115 [2024-07-24 09:50:58.762899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:22.115 [2024-07-24 09:50:58.762909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:22.115 [2024-07-24 09:50:58.762919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:22.115 [2024-07-24 09:50:58.762935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.115 [2024-07-24 09:50:58.762986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:22.115 [2024-07-24 09:50:58.762999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:22.115 [2024-07-24 09:50:58.763009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:22.115 [2024-07-24 09:50:58.763019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.115 [2024-07-24 09:50:58.763036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:22.115 [2024-07-24 
09:50:58.763046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:22.115 [2024-07-24 09:50:58.763056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:22.115 [2024-07-24 09:50:58.763065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.115 [2024-07-24 09:50:58.775911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:22.115 [2024-07-24 09:50:58.775958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:22.115 [2024-07-24 09:50:58.775972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:22.115 [2024-07-24 09:50:58.775982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.115 [2024-07-24 09:50:58.784199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:22.115 [2024-07-24 09:50:58.784243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:22.115 [2024-07-24 09:50:58.784256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:22.115 [2024-07-24 09:50:58.784268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.115 [2024-07-24 09:50:58.784352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:22.115 [2024-07-24 09:50:58.784364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:22.115 [2024-07-24 09:50:58.784375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:22.115 [2024-07-24 09:50:58.784384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.115 [2024-07-24 09:50:58.784427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:22.115 [2024-07-24 09:50:58.784438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:22.115 [2024-07-24 09:50:58.784448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:22.115 [2024-07-24 09:50:58.784457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.115 [2024-07-24 09:50:58.784547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:22.115 [2024-07-24 09:50:58.784560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:22.115 [2024-07-24 09:50:58.784574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:22.115 [2024-07-24 09:50:58.784584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.115 [2024-07-24 09:50:58.784617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:22.115 [2024-07-24 09:50:58.784629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:25:22.115 [2024-07-24 09:50:58.784640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:22.115 [2024-07-24 09:50:58.784649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.115 [2024-07-24 09:50:58.784701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:22.115 [2024-07-24 09:50:58.784716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:22.115 [2024-07-24 09:50:58.784727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:22.115 [2024-07-24 09:50:58.784736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.115 [2024-07-24 09:50:58.784786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:25:22.115 [2024-07-24 09:50:58.784797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:22.115 [2024-07-24 09:50:58.784807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:22.115 [2024-07-24 09:50:58.784817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.115 [2024-07-24 09:50:58.784948] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7141.793 ms, result 0 00:25:23.493 09:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:25:23.493 09:51:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:25:23.493 09:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:23.493 09:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:23.493 09:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:23.493 09:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94992 00:25:23.493 09:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:23.493 09:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:23.493 09:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94992 00:25:23.493 09:51:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 94992 ']' 00:25:23.493 09:51:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.493 09:51:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:23.493 09:51:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.493 09:51:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:23.493 09:51:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:23.493 [2024-07-24 09:51:01.056980] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
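The spdk_tgt launch and waitforlisten traced just above are the harness's tcp_target_setup step: the target is pinned to core 0, started from the saved tgt.json, and the script blocks until the RPC socket answers. A minimal sketch of that bring-up, assuming the same paths as in the log and the waitforlisten helper sourced from autotest_common.sh (pid handling simplified):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    # block until the new target listens on /var/tmp/spdk.sock
    waitforlisten "$spdk_tgt_pid"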
00:25:23.493 [2024-07-24 09:51:01.057119] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94992 ] 00:25:23.493 [2024-07-24 09:51:01.223065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.493 [2024-07-24 09:51:01.271184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.752 [2024-07-24 09:51:01.567676] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:23.752 [2024-07-24 09:51:01.567752] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:24.013 [2024-07-24 09:51:01.711845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.013 [2024-07-24 09:51:01.711909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:24.013 [2024-07-24 09:51:01.711925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:24.013 [2024-07-24 09:51:01.711936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.013 [2024-07-24 09:51:01.712017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.013 [2024-07-24 09:51:01.712036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:24.013 [2024-07-24 09:51:01.712053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:25:24.013 [2024-07-24 09:51:01.712063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.013 [2024-07-24 09:51:01.712088] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:24.013 [2024-07-24 09:51:01.712374] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:24.013 [2024-07-24 09:51:01.712401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.013 [2024-07-24 09:51:01.712411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:24.013 [2024-07-24 09:51:01.712422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.318 ms 00:25:24.013 [2024-07-24 09:51:01.712432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.013 [2024-07-24 09:51:01.713920] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:25:24.013 [2024-07-24 09:51:01.716484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.013 [2024-07-24 09:51:01.716520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:25:24.013 [2024-07-24 09:51:01.716533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.569 ms 00:25:24.013 [2024-07-24 09:51:01.716553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.013 [2024-07-24 09:51:01.716635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.013 [2024-07-24 09:51:01.716648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:25:24.013 [2024-07-24 09:51:01.716659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:25:24.013 [2024-07-24 09:51:01.716686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.013 [2024-07-24 09:51:01.723610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.013 [2024-07-24 
09:51:01.723647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:24.013 [2024-07-24 09:51:01.723660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.854 ms 00:25:24.013 [2024-07-24 09:51:01.723678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.013 [2024-07-24 09:51:01.723731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.013 [2024-07-24 09:51:01.723743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:24.013 [2024-07-24 09:51:01.723757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:25:24.013 [2024-07-24 09:51:01.723767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.013 [2024-07-24 09:51:01.723835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.013 [2024-07-24 09:51:01.723848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:24.013 [2024-07-24 09:51:01.723859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:25:24.013 [2024-07-24 09:51:01.723868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.013 [2024-07-24 09:51:01.723898] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:24.013 [2024-07-24 09:51:01.725627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.013 [2024-07-24 09:51:01.725665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:24.013 [2024-07-24 09:51:01.725676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.741 ms 00:25:24.013 [2024-07-24 09:51:01.725687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.013 [2024-07-24 09:51:01.725719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.013 [2024-07-24 09:51:01.725730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:24.013 [2024-07-24 09:51:01.725740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:24.013 [2024-07-24 09:51:01.725750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.013 [2024-07-24 09:51:01.725775] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:25:24.013 [2024-07-24 09:51:01.725799] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:25:24.013 [2024-07-24 09:51:01.725837] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:25:24.013 [2024-07-24 09:51:01.725855] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:25:24.013 [2024-07-24 09:51:01.725940] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:24.013 [2024-07-24 09:51:01.725953] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:24.013 [2024-07-24 09:51:01.725978] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:25:24.013 [2024-07-24 09:51:01.725991] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:24.013 [2024-07-24 09:51:01.726003] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:25:24.013 [2024-07-24 09:51:01.726018] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:24.013 [2024-07-24 09:51:01.726037] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:24.013 [2024-07-24 09:51:01.726047] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:24.013 [2024-07-24 09:51:01.726057] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:24.013 [2024-07-24 09:51:01.726067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.013 [2024-07-24 09:51:01.726077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:24.013 [2024-07-24 09:51:01.726087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.295 ms 00:25:24.013 [2024-07-24 09:51:01.726098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.013 [2024-07-24 09:51:01.726185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.013 [2024-07-24 09:51:01.726196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:24.013 [2024-07-24 09:51:01.726231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:25:24.013 [2024-07-24 09:51:01.726242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.013 [2024-07-24 09:51:01.726349] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:24.013 [2024-07-24 09:51:01.726362] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:24.013 [2024-07-24 09:51:01.726372] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:24.013 [2024-07-24 09:51:01.726388] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.013 [2024-07-24 09:51:01.726399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:24.013 [2024-07-24 09:51:01.726408] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:24.013 [2024-07-24 09:51:01.726418] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:24.013 [2024-07-24 09:51:01.726427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:24.013 [2024-07-24 09:51:01.726438] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:24.013 [2024-07-24 09:51:01.726447] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.013 [2024-07-24 09:51:01.726456] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:24.013 [2024-07-24 09:51:01.726465] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:24.013 [2024-07-24 09:51:01.726474] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.013 [2024-07-24 09:51:01.726484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:24.013 [2024-07-24 09:51:01.726493] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:24.013 [2024-07-24 09:51:01.726502] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.013 [2024-07-24 09:51:01.726511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:24.013 [2024-07-24 09:51:01.726521] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:24.013 [2024-07-24 09:51:01.726530] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.013 [2024-07-24 09:51:01.726542] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:24.013 [2024-07-24 09:51:01.726551] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:24.013 [2024-07-24 09:51:01.726561] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:24.013 [2024-07-24 09:51:01.726569] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:24.014 [2024-07-24 09:51:01.726579] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:24.014 [2024-07-24 09:51:01.726588] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:24.014 [2024-07-24 09:51:01.726598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:24.014 [2024-07-24 09:51:01.726607] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:24.014 [2024-07-24 09:51:01.726616] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:24.014 [2024-07-24 09:51:01.726625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:24.014 [2024-07-24 09:51:01.726634] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:24.014 [2024-07-24 09:51:01.726643] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:24.014 [2024-07-24 09:51:01.726652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:24.014 [2024-07-24 09:51:01.726660] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:24.014 [2024-07-24 09:51:01.726669] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.014 [2024-07-24 09:51:01.726678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:24.014 [2024-07-24 09:51:01.726690] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:24.014 [2024-07-24 09:51:01.726699] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.014 [2024-07-24 09:51:01.726709] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:24.014 [2024-07-24 09:51:01.726718] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:24.014 [2024-07-24 09:51:01.726727] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.014 [2024-07-24 09:51:01.726736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:24.014 [2024-07-24 09:51:01.726755] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:24.014 [2024-07-24 09:51:01.726764] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.014 [2024-07-24 09:51:01.726773] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:24.014 [2024-07-24 09:51:01.726784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:24.014 [2024-07-24 09:51:01.726793] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:24.014 [2024-07-24 09:51:01.726804] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.014 [2024-07-24 09:51:01.726813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:24.014 [2024-07-24 09:51:01.726823] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:24.014 [2024-07-24 09:51:01.726832] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:24.014 [2024-07-24 09:51:01.726841] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:24.014 [2024-07-24 09:51:01.726856] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:24.014 [2024-07-24 09:51:01.726866] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:24.014 [2024-07-24 09:51:01.726876] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:24.014 [2024-07-24 09:51:01.726891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:24.014 [2024-07-24 09:51:01.726903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:24.014 [2024-07-24 09:51:01.726914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:24.014 [2024-07-24 09:51:01.726924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:24.014 [2024-07-24 09:51:01.726935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:24.014 [2024-07-24 09:51:01.726945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:24.014 [2024-07-24 09:51:01.726955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:24.014 [2024-07-24 09:51:01.726979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:24.014 [2024-07-24 09:51:01.726990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:24.014 [2024-07-24 09:51:01.727000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:24.014 [2024-07-24 09:51:01.727020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:24.014 [2024-07-24 09:51:01.727030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:24.014 [2024-07-24 09:51:01.727041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:24.014 [2024-07-24 09:51:01.727070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:24.014 [2024-07-24 09:51:01.727081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:24.014 [2024-07-24 09:51:01.727091] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:24.014 [2024-07-24 09:51:01.727115] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:24.014 [2024-07-24 09:51:01.727126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:24.014 [2024-07-24 09:51:01.727137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:24.014 [2024-07-24 09:51:01.727147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:24.014 [2024-07-24 09:51:01.727157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:24.014 [2024-07-24 09:51:01.727167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.014 [2024-07-24 09:51:01.727177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:24.014 [2024-07-24 09:51:01.727187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.885 ms 00:25:24.014 [2024-07-24 09:51:01.727197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.014 [2024-07-24 09:51:01.727264] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:25:24.014 [2024-07-24 09:51:01.727278] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:25:27.342 [2024-07-24 09:51:04.753408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.342 [2024-07-24 09:51:04.753472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:27.342 [2024-07-24 09:51:04.753488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3031.054 ms 00:25:27.342 [2024-07-24 09:51:04.753499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.342 [2024-07-24 09:51:04.764269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.342 [2024-07-24 09:51:04.764314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:27.342 [2024-07-24 09:51:04.764330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.683 ms 00:25:27.342 [2024-07-24 09:51:04.764341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.342 [2024-07-24 09:51:04.764411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.342 [2024-07-24 09:51:04.764424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:27.342 [2024-07-24 09:51:04.764434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:25:27.342 [2024-07-24 09:51:04.764456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.342 [2024-07-24 09:51:04.774978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.342 [2024-07-24 09:51:04.775024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:27.342 [2024-07-24 09:51:04.775039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.483 ms 00:25:27.342 [2024-07-24 09:51:04.775053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.342 [2024-07-24 09:51:04.775094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.342 [2024-07-24 09:51:04.775105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:27.342 [2024-07-24 09:51:04.775115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:27.342 [2024-07-24 09:51:04.775125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.342 [2024-07-24 09:51:04.775641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.342 [2024-07-24 09:51:04.775656] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:27.342 [2024-07-24 09:51:04.775676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.423 ms 00:25:27.342 [2024-07-24 09:51:04.775686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.342 [2024-07-24 09:51:04.775733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.342 [2024-07-24 09:51:04.775745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:27.342 [2024-07-24 09:51:04.775755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:25:27.342 [2024-07-24 09:51:04.775765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.342 [2024-07-24 09:51:04.782837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.342 [2024-07-24 09:51:04.782877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:27.342 [2024-07-24 09:51:04.782890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.061 ms 00:25:27.342 [2024-07-24 09:51:04.782900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.342 [2024-07-24 09:51:04.785529] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:27.342 [2024-07-24 09:51:04.785569] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:25:27.342 [2024-07-24 09:51:04.785585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.342 [2024-07-24 09:51:04.785596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:25:27.342 [2024-07-24 09:51:04.785606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.589 ms 00:25:27.342 [2024-07-24 09:51:04.785616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.342 [2024-07-24 09:51:04.789153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.342 [2024-07-24 09:51:04.789313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:25:27.342 [2024-07-24 09:51:04.789396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.499 ms 00:25:27.342 [2024-07-24 09:51:04.789433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.342 [2024-07-24 09:51:04.790879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.342 [2024-07-24 09:51:04.791014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:25:27.342 [2024-07-24 09:51:04.791098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.389 ms 00:25:27.342 [2024-07-24 09:51:04.791133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.342 [2024-07-24 09:51:04.792503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.342 [2024-07-24 09:51:04.792635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:25:27.342 [2024-07-24 09:51:04.792717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.317 ms 00:25:27.342 [2024-07-24 09:51:04.792733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.342 [2024-07-24 09:51:04.793069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.342 [2024-07-24 09:51:04.793090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:27.342 [2024-07-24 
09:51:04.793101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.268 ms 00:25:27.342 [2024-07-24 09:51:04.793111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.342 [2024-07-24 09:51:04.822057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.342 [2024-07-24 09:51:04.822114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:25:27.342 [2024-07-24 09:51:04.822132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.969 ms 00:25:27.342 [2024-07-24 09:51:04.822142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.342 [2024-07-24 09:51:04.828590] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:27.342 [2024-07-24 09:51:04.829546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.342 [2024-07-24 09:51:04.829575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:27.342 [2024-07-24 09:51:04.829588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.346 ms 00:25:27.342 [2024-07-24 09:51:04.829599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.342 [2024-07-24 09:51:04.829689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.342 [2024-07-24 09:51:04.829703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:25:27.342 [2024-07-24 09:51:04.829714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:27.342 [2024-07-24 09:51:04.829728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.342 [2024-07-24 09:51:04.829805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.342 [2024-07-24 09:51:04.829823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:27.342 [2024-07-24 09:51:04.829835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:25:27.342 [2024-07-24 09:51:04.829845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.342 [2024-07-24 09:51:04.829869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.342 [2024-07-24 09:51:04.829881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:27.342 [2024-07-24 09:51:04.829892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:27.342 [2024-07-24 09:51:04.829902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.343 [2024-07-24 09:51:04.829944] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:25:27.343 [2024-07-24 09:51:04.829957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.343 [2024-07-24 09:51:04.829967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:25:27.343 [2024-07-24 09:51:04.829978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:25:27.343 [2024-07-24 09:51:04.829989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.343 [2024-07-24 09:51:04.833752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.343 [2024-07-24 09:51:04.833786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:27.343 [2024-07-24 09:51:04.833798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.728 ms 00:25:27.343 [2024-07-24 09:51:04.833809] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:25:27.343 [2024-07-24 09:51:04.833884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.343 [2024-07-24 09:51:04.833895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:27.343 [2024-07-24 09:51:04.833906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:25:27.343 [2024-07-24 09:51:04.833916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.343 [2024-07-24 09:51:04.835190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3127.968 ms, result 0 00:25:27.343 [2024-07-24 09:51:04.850313] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.343 [2024-07-24 09:51:04.866080] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:27.343 [2024-07-24 09:51:04.874159] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:27.908 09:51:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:27.908 09:51:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:25:27.908 09:51:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:27.908 09:51:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:25:27.908 09:51:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:27.908 [2024-07-24 09:51:05.589518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.908 [2024-07-24 09:51:05.589571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:27.909 [2024-07-24 09:51:05.589588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:25:27.909 [2024-07-24 09:51:05.589598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.909 [2024-07-24 09:51:05.589625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.909 [2024-07-24 09:51:05.589640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:27.909 [2024-07-24 09:51:05.589650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:27.909 [2024-07-24 09:51:05.589660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.909 [2024-07-24 09:51:05.589697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.909 [2024-07-24 09:51:05.589716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:27.909 [2024-07-24 09:51:05.589727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:27.909 [2024-07-24 09:51:05.589738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.909 [2024-07-24 09:51:05.589810] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.289 ms, result 0 00:25:27.909 true 00:25:27.909 09:51:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:28.167 { 00:25:28.167 "name": "ftl", 00:25:28.167 "properties": [ 00:25:28.167 { 00:25:28.167 "name": "superblock_version", 00:25:28.167 "value": 5, 00:25:28.167 "read-only": true 00:25:28.167 }, 00:25:28.167 { 
00:25:28.167 "name": "base_device", 00:25:28.167 "bands": [ 00:25:28.167 { 00:25:28.168 "id": 0, 00:25:28.168 "state": "CLOSED", 00:25:28.168 "validity": 1.0 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 1, 00:25:28.168 "state": "CLOSED", 00:25:28.168 "validity": 1.0 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 2, 00:25:28.168 "state": "CLOSED", 00:25:28.168 "validity": 0.007843137254901933 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 3, 00:25:28.168 "state": "FREE", 00:25:28.168 "validity": 0.0 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 4, 00:25:28.168 "state": "FREE", 00:25:28.168 "validity": 0.0 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 5, 00:25:28.168 "state": "FREE", 00:25:28.168 "validity": 0.0 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 6, 00:25:28.168 "state": "FREE", 00:25:28.168 "validity": 0.0 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 7, 00:25:28.168 "state": "FREE", 00:25:28.168 "validity": 0.0 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 8, 00:25:28.168 "state": "FREE", 00:25:28.168 "validity": 0.0 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 9, 00:25:28.168 "state": "FREE", 00:25:28.168 "validity": 0.0 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 10, 00:25:28.168 "state": "FREE", 00:25:28.168 "validity": 0.0 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 11, 00:25:28.168 "state": "FREE", 00:25:28.168 "validity": 0.0 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 12, 00:25:28.168 "state": "FREE", 00:25:28.168 "validity": 0.0 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 13, 00:25:28.168 "state": "FREE", 00:25:28.168 "validity": 0.0 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 14, 00:25:28.168 "state": "FREE", 00:25:28.168 "validity": 0.0 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 15, 00:25:28.168 "state": "FREE", 00:25:28.168 "validity": 0.0 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 16, 00:25:28.168 "state": "FREE", 00:25:28.168 "validity": 0.0 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 17, 00:25:28.168 "state": "FREE", 00:25:28.168 "validity": 0.0 00:25:28.168 } 00:25:28.168 ], 00:25:28.168 "read-only": true 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "name": "cache_device", 00:25:28.168 "type": "bdev", 00:25:28.168 "chunks": [ 00:25:28.168 { 00:25:28.168 "id": 0, 00:25:28.168 "state": "INACTIVE", 00:25:28.168 "utilization": 0.0 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 1, 00:25:28.168 "state": "OPEN", 00:25:28.168 "utilization": 0.0 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 2, 00:25:28.168 "state": "OPEN", 00:25:28.168 "utilization": 0.0 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 3, 00:25:28.168 "state": "FREE", 00:25:28.168 "utilization": 0.0 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "id": 4, 00:25:28.168 "state": "FREE", 00:25:28.168 "utilization": 0.0 00:25:28.168 } 00:25:28.168 ], 00:25:28.168 "read-only": true 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "name": "verbose_mode", 00:25:28.168 "value": true, 00:25:28.168 "unit": "", 00:25:28.168 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:28.168 }, 00:25:28.168 { 00:25:28.168 "name": "prep_upgrade_on_shutdown", 00:25:28.168 "value": false, 00:25:28.168 "unit": "", 00:25:28.168 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:28.168 } 00:25:28.168 ] 00:25:28.168 } 00:25:28.168 09:51:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | 
.chunks[] | select(.utilization != 0.0)] | length' 00:25:28.168 09:51:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:25:28.168 09:51:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:28.427 09:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:25:28.428 09:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:25:28.428 09:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:25:28.428 09:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:28.428 09:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:25:28.428 09:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:25:28.428 09:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:25:28.428 09:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:25:28.428 09:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:25:28.428 Validate MD5 checksum, iteration 1 00:25:28.428 09:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:25:28.428 09:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:28.428 09:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:25:28.428 09:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:28.428 09:51:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:28.428 09:51:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:28.428 09:51:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:28.428 09:51:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:28.428 09:51:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:28.687 [2024-07-24 09:51:06.288064] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
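The used=0 and opened=0 values above come from piping the bdev_ftl_get_properties JSON through the two jq filters traced in the script. A condensed, equivalent form of both checks, using the same rpc.py path and expressions shown in the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # count NV cache chunks with non-zero utilization (used=0 in this run)
    "$rpc" bdev_ftl_get_properties -b ftl | jq '[.properties[]
        | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
    # count bands reported in the OPENED state (opened=0 in this run)
    "$rpc" bdev_ftl_get_properties -b ftl | jq '[.properties[]
        | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length'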
00:25:28.687 [2024-07-24 09:51:06.288700] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95061 ] 00:25:28.687 [2024-07-24 09:51:06.458043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.945 [2024-07-24 09:51:06.507990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.560  Copying: 687/1024 [MB] (687 MBps) Copying: 1024/1024 [MB] (average 679 MBps) 00:25:31.560 00:25:31.560 09:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:25:31.560 09:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:33.505 09:51:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:33.505 Validate MD5 checksum, iteration 2 00:25:33.505 09:51:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b6d4af546d4dbcbc10a9d1c9843bc01f 00:25:33.505 09:51:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b6d4af546d4dbcbc10a9d1c9843bc01f != \b\6\d\4\a\f\5\4\6\d\4\d\b\c\b\c\1\0\a\9\d\1\c\9\8\4\3\b\c\0\1\f ]] 00:25:33.505 09:51:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:33.505 09:51:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:33.505 09:51:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:25:33.505 09:51:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:33.505 09:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:33.505 09:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:33.505 09:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:33.505 09:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:33.505 09:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:33.505 [2024-07-24 09:51:10.942610] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:25:33.505 [2024-07-24 09:51:10.942935] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95113 ] 00:25:33.505 [2024-07-24 09:51:11.110852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.505 [2024-07-24 09:51:11.159962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.778  Copying: 687/1024 [MB] (687 MBps) Copying: 1024/1024 [MB] (average 678 MBps) 00:25:38.778 00:25:38.778 09:51:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:25:38.778 09:51:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8ef9712543e795e02d025f9b7782b2ed 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8ef9712543e795e02d025f9b7782b2ed != \8\e\f\9\7\1\2\5\4\3\e\7\9\5\e\0\2\d\0\2\5\f\9\b\7\7\8\2\b\2\e\d ]] 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 94992 ]] 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 94992 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95190 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95190 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 95190 ']' 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
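The two iterations above are the test's checksum validation: each reads a 1 GiB slice of ftln1 through the NVMe/TCP initiator with tcp_dd, hashes the output file, and compares the digest with the reference value the test recorded for that slice earlier. The harness then deliberately skips a clean shutdown, kills the target with SIGKILL and brings up a fresh one from the same tgt.json, so the next FTL startup has to recover from a dirty state. A rough sketch of the cycle, assuming the tcp_dd/tcp_target_setup helpers from ftl/common.sh and a hypothetical ref_sum array holding the expected digests:

    # read back two 1 GiB slices and verify their MD5 sums (ref_sum is illustrative)
    skip=0
    for i in 0 1; do
        tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
            --bs=1048576 --count=1024 --qd=2 --skip=$skip
        sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
        [[ $sum == "${ref_sum[$i]}" ]] || exit 1
        skip=$((skip + 1024))
    done
    # dirty shutdown: kill the target so FTL cannot shut down cleanly, then restart it
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
    tcp_target_setup    # re-launches spdk_tgt from tgt.json, as sketched earlier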
00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:40.157 09:51:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:40.157 [2024-07-24 09:51:17.928419] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:25:40.157 [2024-07-24 09:51:17.928720] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95190 ] 00:25:40.425 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 94992 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:25:40.425 [2024-07-24 09:51:18.086999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.425 [2024-07-24 09:51:18.133015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.683 [2024-07-24 09:51:18.422336] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:40.683 [2024-07-24 09:51:18.422407] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:40.942 [2024-07-24 09:51:18.566459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.942 [2024-07-24 09:51:18.566519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:40.942 [2024-07-24 09:51:18.566542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:40.942 [2024-07-24 09:51:18.566560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.942 [2024-07-24 09:51:18.566635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.942 [2024-07-24 09:51:18.566649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:40.942 [2024-07-24 09:51:18.566660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:25:40.942 [2024-07-24 09:51:18.566670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.942 [2024-07-24 09:51:18.566693] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:40.942 [2024-07-24 09:51:18.566991] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:40.942 [2024-07-24 09:51:18.567011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.942 [2024-07-24 09:51:18.567022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:40.942 [2024-07-24 09:51:18.567032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.323 ms 00:25:40.942 [2024-07-24 09:51:18.567042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.942 [2024-07-24 09:51:18.567391] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:25:40.942 [2024-07-24 09:51:18.571856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.942 [2024-07-24 09:51:18.571901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:25:40.942 [2024-07-24 09:51:18.571916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.472 ms 00:25:40.942 [2024-07-24 09:51:18.571927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.942 [2024-07-24 09:51:18.573447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:25:40.942 [2024-07-24 09:51:18.573482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:25:40.942 [2024-07-24 09:51:18.573495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:25:40.942 [2024-07-24 09:51:18.573509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.942 [2024-07-24 09:51:18.573919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.942 [2024-07-24 09:51:18.573946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:40.942 [2024-07-24 09:51:18.573957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.347 ms 00:25:40.942 [2024-07-24 09:51:18.573967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.942 [2024-07-24 09:51:18.574010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.942 [2024-07-24 09:51:18.574022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:40.942 [2024-07-24 09:51:18.574044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:25:40.942 [2024-07-24 09:51:18.574061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.942 [2024-07-24 09:51:18.574095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.942 [2024-07-24 09:51:18.574105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:40.942 [2024-07-24 09:51:18.574115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:25:40.942 [2024-07-24 09:51:18.574125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.942 [2024-07-24 09:51:18.574150] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:40.942 [2024-07-24 09:51:18.575110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.942 [2024-07-24 09:51:18.575137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:40.942 [2024-07-24 09:51:18.575148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.969 ms 00:25:40.942 [2024-07-24 09:51:18.575157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.942 [2024-07-24 09:51:18.575203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.942 [2024-07-24 09:51:18.575215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:40.942 [2024-07-24 09:51:18.575226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:40.942 [2024-07-24 09:51:18.575236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.942 [2024-07-24 09:51:18.575273] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:25:40.942 [2024-07-24 09:51:18.575296] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:25:40.942 [2024-07-24 09:51:18.575340] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:25:40.942 [2024-07-24 09:51:18.575360] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:25:40.942 [2024-07-24 09:51:18.575443] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:40.942 [2024-07-24 09:51:18.575457] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:40.942 [2024-07-24 09:51:18.575470] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:25:40.942 [2024-07-24 09:51:18.575486] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:40.942 [2024-07-24 09:51:18.575498] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:40.942 [2024-07-24 09:51:18.575516] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:40.942 [2024-07-24 09:51:18.575532] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:40.942 [2024-07-24 09:51:18.575546] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:40.942 [2024-07-24 09:51:18.575555] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:40.942 [2024-07-24 09:51:18.575566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.942 [2024-07-24 09:51:18.575575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:40.942 [2024-07-24 09:51:18.575586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.295 ms 00:25:40.942 [2024-07-24 09:51:18.575595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.942 [2024-07-24 09:51:18.575672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.942 [2024-07-24 09:51:18.575682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:40.942 [2024-07-24 09:51:18.575692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:25:40.942 [2024-07-24 09:51:18.575701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.942 [2024-07-24 09:51:18.575801] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:40.942 [2024-07-24 09:51:18.575818] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:40.942 [2024-07-24 09:51:18.575829] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:40.942 [2024-07-24 09:51:18.575839] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:40.942 [2024-07-24 09:51:18.575849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:40.943 [2024-07-24 09:51:18.575859] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:40.943 [2024-07-24 09:51:18.575869] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:40.943 [2024-07-24 09:51:18.575879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:40.943 [2024-07-24 09:51:18.575889] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:40.943 [2024-07-24 09:51:18.575899] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:40.943 [2024-07-24 09:51:18.575908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:40.943 [2024-07-24 09:51:18.575917] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:40.943 [2024-07-24 09:51:18.575926] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:40.943 [2024-07-24 09:51:18.575935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:40.943 [2024-07-24 09:51:18.575945] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:25:40.943 [2024-07-24 09:51:18.575954] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:40.943 [2024-07-24 09:51:18.575963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:40.943 [2024-07-24 09:51:18.575976] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:40.943 [2024-07-24 09:51:18.575985] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:40.943 [2024-07-24 09:51:18.575994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:40.943 [2024-07-24 09:51:18.576003] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:40.943 [2024-07-24 09:51:18.576012] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:40.943 [2024-07-24 09:51:18.576021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:40.943 [2024-07-24 09:51:18.576030] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:40.943 [2024-07-24 09:51:18.576039] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:40.943 [2024-07-24 09:51:18.576048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:40.943 [2024-07-24 09:51:18.576057] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:40.943 [2024-07-24 09:51:18.576066] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:40.943 [2024-07-24 09:51:18.576075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:40.943 [2024-07-24 09:51:18.576084] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:40.943 [2024-07-24 09:51:18.576093] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:40.943 [2024-07-24 09:51:18.576102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:40.943 [2024-07-24 09:51:18.576111] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:40.943 [2024-07-24 09:51:18.576123] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:40.943 [2024-07-24 09:51:18.576132] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:40.943 [2024-07-24 09:51:18.576141] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:40.943 [2024-07-24 09:51:18.576151] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:40.943 [2024-07-24 09:51:18.576159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:40.943 [2024-07-24 09:51:18.576170] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:40.943 [2024-07-24 09:51:18.576384] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:40.943 [2024-07-24 09:51:18.576431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:40.943 [2024-07-24 09:51:18.576462] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:40.943 [2024-07-24 09:51:18.576491] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:40.943 [2024-07-24 09:51:18.576520] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:40.943 [2024-07-24 09:51:18.576550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:40.943 [2024-07-24 09:51:18.576626] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:40.943 [2024-07-24 09:51:18.576660] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:25:40.943 [2024-07-24 09:51:18.576691] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:40.943 [2024-07-24 09:51:18.576721] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:40.943 [2024-07-24 09:51:18.576759] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:40.943 [2024-07-24 09:51:18.576789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:40.943 [2024-07-24 09:51:18.576817] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:40.943 [2024-07-24 09:51:18.576899] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:40.943 [2024-07-24 09:51:18.576939] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:40.943 [2024-07-24 09:51:18.576989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:40.943 [2024-07-24 09:51:18.577050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:40.943 [2024-07-24 09:51:18.577096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:40.943 [2024-07-24 09:51:18.577200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:40.943 [2024-07-24 09:51:18.577253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:40.943 [2024-07-24 09:51:18.577300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:40.943 [2024-07-24 09:51:18.577347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:40.943 [2024-07-24 09:51:18.577436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:40.943 [2024-07-24 09:51:18.577484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:40.943 [2024-07-24 09:51:18.577531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:40.943 [2024-07-24 09:51:18.577577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:40.943 [2024-07-24 09:51:18.577657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:40.943 [2024-07-24 09:51:18.577706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:40.943 [2024-07-24 09:51:18.577752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:40.943 [2024-07-24 09:51:18.577798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:40.943 [2024-07-24 09:51:18.577829] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:25:40.943 [2024-07-24 09:51:18.577842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:40.943 [2024-07-24 09:51:18.577853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:40.943 [2024-07-24 09:51:18.577863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:40.943 [2024-07-24 09:51:18.577873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:40.943 [2024-07-24 09:51:18.577884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:40.943 [2024-07-24 09:51:18.577897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.943 [2024-07-24 09:51:18.577908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:40.943 [2024-07-24 09:51:18.577919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.150 ms 00:25:40.943 [2024-07-24 09:51:18.577943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.943 [2024-07-24 09:51:18.588021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.943 [2024-07-24 09:51:18.588277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:40.943 [2024-07-24 09:51:18.588358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.021 ms 00:25:40.943 [2024-07-24 09:51:18.588401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.943 [2024-07-24 09:51:18.588494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.943 [2024-07-24 09:51:18.588526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:40.943 [2024-07-24 09:51:18.588558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:25:40.943 [2024-07-24 09:51:18.588641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.943 [2024-07-24 09:51:18.599396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.943 [2024-07-24 09:51:18.599619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:40.943 [2024-07-24 09:51:18.599738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.640 ms 00:25:40.943 [2024-07-24 09:51:18.599783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.943 [2024-07-24 09:51:18.599868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.943 [2024-07-24 09:51:18.599900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:40.943 [2024-07-24 09:51:18.599932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:40.943 [2024-07-24 09:51:18.600026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.943 [2024-07-24 09:51:18.600210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.943 [2024-07-24 09:51:18.600253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:40.943 [2024-07-24 09:51:18.600381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:25:40.943 [2024-07-24 09:51:18.600483] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:25:40.944 [2024-07-24 09:51:18.600565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.944 [2024-07-24 09:51:18.600598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:40.944 [2024-07-24 09:51:18.600633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:25:40.944 [2024-07-24 09:51:18.600714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.944 [2024-07-24 09:51:18.607878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.944 [2024-07-24 09:51:18.608104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:40.944 [2024-07-24 09:51:18.608296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.121 ms 00:25:40.944 [2024-07-24 09:51:18.608378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.944 [2024-07-24 09:51:18.608540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.944 [2024-07-24 09:51:18.608630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:25:40.944 [2024-07-24 09:51:18.608668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:40.944 [2024-07-24 09:51:18.608728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.944 [2024-07-24 09:51:18.628648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.944 [2024-07-24 09:51:18.628944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:25:40.944 [2024-07-24 09:51:18.629065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.893 ms 00:25:40.944 [2024-07-24 09:51:18.629123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.944 [2024-07-24 09:51:18.631117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.944 [2024-07-24 09:51:18.631289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:40.944 [2024-07-24 09:51:18.631381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.297 ms 00:25:40.944 [2024-07-24 09:51:18.631427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.944 [2024-07-24 09:51:18.654265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.944 [2024-07-24 09:51:18.654556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:25:40.944 [2024-07-24 09:51:18.654754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.766 ms 00:25:40.944 [2024-07-24 09:51:18.654794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.944 [2024-07-24 09:51:18.654977] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:25:40.944 [2024-07-24 09:51:18.655199] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:25:40.944 [2024-07-24 09:51:18.655339] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:25:40.944 [2024-07-24 09:51:18.655544] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:25:40.944 [2024-07-24 09:51:18.655594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.944 [2024-07-24 09:51:18.655624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:25:40.944 [2024-07-24 
09:51:18.655655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.715 ms 00:25:40.944 [2024-07-24 09:51:18.655738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.944 [2024-07-24 09:51:18.655856] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:25:40.944 [2024-07-24 09:51:18.655910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.944 [2024-07-24 09:51:18.656008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:25:40.944 [2024-07-24 09:51:18.656040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:25:40.944 [2024-07-24 09:51:18.656110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.944 [2024-07-24 09:51:18.659754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.944 [2024-07-24 09:51:18.659925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:25:40.944 [2024-07-24 09:51:18.659957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.546 ms 00:25:40.944 [2024-07-24 09:51:18.659968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.944 [2024-07-24 09:51:18.661054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.944 [2024-07-24 09:51:18.661086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:25:40.944 [2024-07-24 09:51:18.661098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:25:40.944 [2024-07-24 09:51:18.661108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.944 [2024-07-24 09:51:18.661366] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:25:41.511 [2024-07-24 09:51:19.203757] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:25:41.511 [2024-07-24 09:51:19.203895] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:25:42.077 [2024-07-24 09:51:19.715404] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:25:42.077 [2024-07-24 09:51:19.715522] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:42.077 [2024-07-24 09:51:19.715539] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:25:42.077 [2024-07-24 09:51:19.715553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.077 [2024-07-24 09:51:19.715564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:25:42.078 [2024-07-24 09:51:19.715578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1056.110 ms 00:25:42.078 [2024-07-24 09:51:19.715589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.078 [2024-07-24 09:51:19.715625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.078 [2024-07-24 09:51:19.715645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:25:42.078 [2024-07-24 09:51:19.715655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:42.078 [2024-07-24 09:51:19.715665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 
0 00:25:42.078 [2024-07-24 09:51:19.722651] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:42.078 [2024-07-24 09:51:19.722787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.078 [2024-07-24 09:51:19.722799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:42.078 [2024-07-24 09:51:19.722812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.115 ms 00:25:42.078 [2024-07-24 09:51:19.722827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.078 [2024-07-24 09:51:19.723474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.078 [2024-07-24 09:51:19.723503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:25:42.078 [2024-07-24 09:51:19.723515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.557 ms 00:25:42.078 [2024-07-24 09:51:19.723525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.078 [2024-07-24 09:51:19.725436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.078 [2024-07-24 09:51:19.725462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:25:42.078 [2024-07-24 09:51:19.725474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.895 ms 00:25:42.078 [2024-07-24 09:51:19.725489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.078 [2024-07-24 09:51:19.725530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.078 [2024-07-24 09:51:19.725541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:25:42.078 [2024-07-24 09:51:19.725561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:42.078 [2024-07-24 09:51:19.725571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.078 [2024-07-24 09:51:19.725676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.078 [2024-07-24 09:51:19.725688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:42.078 [2024-07-24 09:51:19.725699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:25:42.078 [2024-07-24 09:51:19.725709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.078 [2024-07-24 09:51:19.725733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.078 [2024-07-24 09:51:19.725744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:42.078 [2024-07-24 09:51:19.725754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:42.078 [2024-07-24 09:51:19.725770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.078 [2024-07-24 09:51:19.725811] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:25:42.078 [2024-07-24 09:51:19.725822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.078 [2024-07-24 09:51:19.725832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:25:42.078 [2024-07-24 09:51:19.725842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:25:42.078 [2024-07-24 09:51:19.725852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.078 [2024-07-24 09:51:19.725909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.078 [2024-07-24 
09:51:19.725921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:42.078 [2024-07-24 09:51:19.725931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:25:42.078 [2024-07-24 09:51:19.725941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.078 [2024-07-24 09:51:19.726998] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1162.012 ms, result 0 00:25:42.078 [2024-07-24 09:51:19.739352] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.078 [2024-07-24 09:51:19.755314] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:42.078 [2024-07-24 09:51:19.763395] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:42.646 Validate MD5 checksum, iteration 1 00:25:42.646 09:51:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:42.646 09:51:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:25:42.646 09:51:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:42.646 09:51:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:25:42.646 09:51:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:25:42.646 09:51:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:25:42.646 09:51:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:25:42.646 09:51:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:42.646 09:51:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:25:42.646 09:51:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:42.646 09:51:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:42.646 09:51:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:42.646 09:51:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:42.646 09:51:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:42.646 09:51:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:42.905 [2024-07-24 09:51:20.498422] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:25:42.905 [2024-07-24 09:51:20.498777] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95226 ] 00:25:42.905 [2024-07-24 09:51:20.664998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.905 [2024-07-24 09:51:20.711317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.650  Copying: 629/1024 [MB] (629 MBps) Copying: 1024/1024 [MB] (average 646 MBps) 00:25:45.650 00:25:45.650 09:51:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:25:45.650 09:51:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:47.579 09:51:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:47.579 Validate MD5 checksum, iteration 2 00:25:47.579 09:51:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b6d4af546d4dbcbc10a9d1c9843bc01f 00:25:47.579 09:51:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b6d4af546d4dbcbc10a9d1c9843bc01f != \b\6\d\4\a\f\5\4\6\d\4\d\b\c\b\c\1\0\a\9\d\1\c\9\8\4\3\b\c\0\1\f ]] 00:25:47.579 09:51:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:47.579 09:51:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:47.579 09:51:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:25:47.579 09:51:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:47.579 09:51:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:47.579 09:51:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:47.579 09:51:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:47.579 09:51:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:47.579 09:51:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:47.579 [2024-07-24 09:51:25.149184] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
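The trace above amounts to a straightforward read-back-and-verify loop: each iteration copies a 1024 MiB window out of the ftln1 bdev over NVMe/TCP into a scratch file, hashes it, and compares the digest against the value recorded for that window. A minimal bash sketch of that loop, assuming a tcp_dd-style helper that takes --skip/--count in 1 MiB blocks (as in the trace) and a precomputed list of reference sums; the variable names here are illustrative, not the test's actual ones:

    # walk the device in 1024 MiB windows, verifying each one against its reference digest
    skip=0
    for ref in "${reference_sums[@]}"; do              # reference_sums assumed filled beforehand
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
        sum=$(md5sum "$testfile" | cut -f1 -d' ')      # keep only the hex digest
        [[ "$sum" == "$ref" ]] || { echo "checksum mismatch at skip=$skip"; exit 1; }
        skip=$((skip + 1024))
    done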
00:25:47.579 [2024-07-24 09:51:25.149486] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95275 ] 00:25:47.579 [2024-07-24 09:51:25.317253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.579 [2024-07-24 09:51:25.364340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.096  Copying: 710/1024 [MB] (710 MBps) Copying: 1024/1024 [MB] (average 711 MBps) 00:25:50.096 00:25:50.096 09:51:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:25:50.096 09:51:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8ef9712543e795e02d025f9b7782b2ed 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8ef9712543e795e02d025f9b7782b2ed != \8\e\f\9\7\1\2\5\4\3\e\7\9\5\e\0\2\d\0\2\5\f\9\b\7\7\8\2\b\2\e\d ]] 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 95190 ]] 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 95190 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 95190 ']' 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 95190 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95190 00:25:52.001 killing process with pid 95190 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95190' 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 95190 00:25:52.001 09:51:29 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@974 -- # wait 95190 00:25:52.001 [2024-07-24 09:51:29.768033] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:52.001 [2024-07-24 09:51:29.772632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.001 [2024-07-24 09:51:29.772676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:52.001 [2024-07-24 09:51:29.772692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:52.001 [2024-07-24 09:51:29.772703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.001 [2024-07-24 09:51:29.772732] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:52.001 [2024-07-24 09:51:29.773589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.001 [2024-07-24 09:51:29.773674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:52.001 [2024-07-24 09:51:29.773745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.841 ms 00:25:52.001 [2024-07-24 09:51:29.773810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.001 [2024-07-24 09:51:29.774045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.001 [2024-07-24 09:51:29.774126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:25:52.001 [2024-07-24 09:51:29.774206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.182 ms 00:25:52.001 [2024-07-24 09:51:29.774306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.001 [2024-07-24 09:51:29.775430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.001 [2024-07-24 09:51:29.775549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:25:52.001 [2024-07-24 09:51:29.775640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.043 ms 00:25:52.001 [2024-07-24 09:51:29.775656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.001 [2024-07-24 09:51:29.776689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.001 [2024-07-24 09:51:29.776783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:25:52.001 [2024-07-24 09:51:29.776889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.996 ms 00:25:52.001 [2024-07-24 09:51:29.776961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.001 [2024-07-24 09:51:29.778399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.001 [2024-07-24 09:51:29.778523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:25:52.001 [2024-07-24 09:51:29.778594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.349 ms 00:25:52.001 [2024-07-24 09:51:29.778691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.001 [2024-07-24 09:51:29.780081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.001 [2024-07-24 09:51:29.780213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:25:52.001 [2024-07-24 09:51:29.780233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.323 ms 00:25:52.001 [2024-07-24 09:51:29.780244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.001 [2024-07-24 09:51:29.780332] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.001 [2024-07-24 09:51:29.780344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:25:52.001 [2024-07-24 09:51:29.780360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:25:52.001 [2024-07-24 09:51:29.780370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.001 [2024-07-24 09:51:29.781536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.001 [2024-07-24 09:51:29.781569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:25:52.001 [2024-07-24 09:51:29.781581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.149 ms 00:25:52.001 [2024-07-24 09:51:29.781590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.001 [2024-07-24 09:51:29.782877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.001 [2024-07-24 09:51:29.782911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:25:52.001 [2024-07-24 09:51:29.782923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.258 ms 00:25:52.001 [2024-07-24 09:51:29.782933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.001 [2024-07-24 09:51:29.784127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.001 [2024-07-24 09:51:29.784160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:25:52.001 [2024-07-24 09:51:29.784172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.166 ms 00:25:52.001 [2024-07-24 09:51:29.784181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.001 [2024-07-24 09:51:29.785364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.001 [2024-07-24 09:51:29.785397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:25:52.001 [2024-07-24 09:51:29.785409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.115 ms 00:25:52.001 [2024-07-24 09:51:29.785429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.001 [2024-07-24 09:51:29.785460] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:25:52.001 [2024-07-24 09:51:29.785475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:52.001 [2024-07-24 09:51:29.785487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:25:52.001 [2024-07-24 09:51:29.785498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:25:52.001 [2024-07-24 09:51:29.785510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:52.001 [2024-07-24 09:51:29.785521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:52.001 [2024-07-24 09:51:29.785531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:52.001 [2024-07-24 09:51:29.785542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:52.001 [2024-07-24 09:51:29.785552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:52.002 [2024-07-24 09:51:29.785562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 
261120 wr_cnt: 0 state: free 00:25:52.002 [2024-07-24 09:51:29.785573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:52.002 [2024-07-24 09:51:29.785583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:52.002 [2024-07-24 09:51:29.785593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:52.002 [2024-07-24 09:51:29.785604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:52.002 [2024-07-24 09:51:29.785615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:52.002 [2024-07-24 09:51:29.785626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:52.002 [2024-07-24 09:51:29.785636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:52.002 [2024-07-24 09:51:29.785646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:52.002 [2024-07-24 09:51:29.785656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:52.002 [2024-07-24 09:51:29.785669] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:25:52.002 [2024-07-24 09:51:29.785678] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: fd2e273c-18e7-4f40-ac1a-ad70a9b87133 00:25:52.002 [2024-07-24 09:51:29.785689] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:25:52.002 [2024-07-24 09:51:29.785699] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:25:52.002 [2024-07-24 09:51:29.785709] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:25:52.002 [2024-07-24 09:51:29.785718] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:25:52.002 [2024-07-24 09:51:29.785728] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:25:52.002 [2024-07-24 09:51:29.785741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:25:52.002 [2024-07-24 09:51:29.785751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:25:52.002 [2024-07-24 09:51:29.785760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:25:52.002 [2024-07-24 09:51:29.785769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:25:52.002 [2024-07-24 09:51:29.785779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.002 [2024-07-24 09:51:29.785789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:25:52.002 [2024-07-24 09:51:29.785800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.320 ms 00:25:52.002 [2024-07-24 09:51:29.785809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.002 [2024-07-24 09:51:29.787465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.002 [2024-07-24 09:51:29.787485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:25:52.002 [2024-07-24 09:51:29.787496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.641 ms 00:25:52.002 [2024-07-24 09:51:29.787511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.002 [2024-07-24 09:51:29.787613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:25:52.002 [2024-07-24 09:51:29.787623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:25:52.002 [2024-07-24 09:51:29.787634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms 00:25:52.002 [2024-07-24 09:51:29.787643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.002 [2024-07-24 09:51:29.794541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:52.002 [2024-07-24 09:51:29.794578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:52.002 [2024-07-24 09:51:29.794595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:52.002 [2024-07-24 09:51:29.794605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.002 [2024-07-24 09:51:29.794633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:52.002 [2024-07-24 09:51:29.794643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:52.002 [2024-07-24 09:51:29.794654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:52.002 [2024-07-24 09:51:29.794663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.002 [2024-07-24 09:51:29.794740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:52.002 [2024-07-24 09:51:29.794753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:52.002 [2024-07-24 09:51:29.794763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:52.002 [2024-07-24 09:51:29.794777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.002 [2024-07-24 09:51:29.794796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:52.002 [2024-07-24 09:51:29.794807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:52.002 [2024-07-24 09:51:29.794817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:52.002 [2024-07-24 09:51:29.794826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.002 [2024-07-24 09:51:29.807569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:52.002 [2024-07-24 09:51:29.807611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:52.002 [2024-07-24 09:51:29.807629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:52.002 [2024-07-24 09:51:29.807640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.002 [2024-07-24 09:51:29.815763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:52.002 [2024-07-24 09:51:29.815803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:52.002 [2024-07-24 09:51:29.815816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:52.002 [2024-07-24 09:51:29.815826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.002 [2024-07-24 09:51:29.815892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:52.002 [2024-07-24 09:51:29.815904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:52.002 [2024-07-24 09:51:29.815914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:52.002 [2024-07-24 09:51:29.815924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.002 [2024-07-24 09:51:29.815966] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:52.002 [2024-07-24 09:51:29.815978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:52.002 [2024-07-24 09:51:29.815988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:52.002 [2024-07-24 09:51:29.815998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.002 [2024-07-24 09:51:29.816073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:52.002 [2024-07-24 09:51:29.816086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:52.002 [2024-07-24 09:51:29.816096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:52.002 [2024-07-24 09:51:29.816106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.002 [2024-07-24 09:51:29.816139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:52.002 [2024-07-24 09:51:29.816155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:25:52.002 [2024-07-24 09:51:29.816166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:52.002 [2024-07-24 09:51:29.816175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.002 [2024-07-24 09:51:29.816232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:52.002 [2024-07-24 09:51:29.816244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:52.002 [2024-07-24 09:51:29.816254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:52.002 [2024-07-24 09:51:29.816264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.002 [2024-07-24 09:51:29.816311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:52.002 [2024-07-24 09:51:29.816323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:52.002 [2024-07-24 09:51:29.816333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:52.002 [2024-07-24 09:51:29.816343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.002 [2024-07-24 09:51:29.816468] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 43.870 ms, result 0 00:25:52.262 09:51:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:25:52.262 09:51:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:52.262 09:51:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:25:52.262 09:51:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:25:52.262 09:51:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:25:52.262 09:51:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:52.262 09:51:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:25:52.262 Remove shared memory files 00:25:52.262 09:51:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:52.262 09:51:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:52.262 09:51:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:52.262 09:51:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid94992 00:25:52.262 09:51:30 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:52.262 09:51:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:52.262 ************************************ 00:25:52.262 END TEST ftl_upgrade_shutdown 00:25:52.262 ************************************ 00:25:52.262 00:25:52.262 real 1m7.812s 00:25:52.262 user 1m29.973s 00:25:52.262 sys 0m20.440s 00:25:52.262 09:51:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:52.262 09:51:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:52.521 09:51:30 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:25:52.521 09:51:30 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:25:52.521 09:51:30 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:25:52.521 09:51:30 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:52.521 09:51:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:52.521 ************************************ 00:25:52.521 START TEST ftl_restore_fast 00:25:52.521 ************************************ 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:25:52.521 * Looking for test storage... 00:25:52.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
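The restore.sh run started above is invoked as "restore.sh -f -c 0000:00:10.0 0000:00:11.0", and the getopts trace that follows shows the flags being consumed: -f enables fast shutdown, -c sets the NV cache BDF, and the remaining positional argument is the base device. A rough bash sketch of that argument handling, using the same option string the trace shows (:u:c:f); the -u branch and the way the shift count is computed are assumptions here, since the xtrace only shows the already-expanded "shift 3":

    # parse -f (fast shutdown) and -c <bdf> (NV cache), then take the base device positionally
    while getopts ':u:c:f' opt; do
        case $opt in
            f) fast_shutdown=1 ;;
            c) nv_cache=$OPTARG ;;        # e.g. 0000:00:10.0
            u) uuid=$OPTARG ;;            # assumed meaning of -u
        esac
    done
    shift $((OPTIND - 1))                 # evaluates to "shift 3" for the invocation above
    device=$1                             # e.g. 0000:00:11.0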
00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Hlac44rWXP 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:25:52.521 09:51:30 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=95397 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 95397 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 95397 ']' 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:52.521 09:51:30 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:25:52.780 [2024-07-24 09:51:30.413350] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 00:25:52.780 [2024-07-24 09:51:30.413920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95397 ] 00:25:52.780 [2024-07-24 09:51:30.569174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.038 [2024-07-24 09:51:30.629768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.605 09:51:31 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:53.605 09:51:31 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:25:53.605 09:51:31 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:53.605 09:51:31 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:25:53.605 09:51:31 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:53.605 09:51:31 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:25:53.605 09:51:31 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:25:53.605 09:51:31 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:53.864 09:51:31 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:53.864 09:51:31 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:25:53.864 09:51:31 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:53.864 09:51:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:25:53.864 09:51:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:53.864 09:51:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:25:53.864 09:51:31 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:25:53.864 09:51:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:54.123 09:51:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:54.123 { 00:25:54.123 "name": "nvme0n1", 00:25:54.123 "aliases": [ 00:25:54.123 "4a83077c-ce78-4ba3-9946-4771bd28e9cb" 00:25:54.123 ], 00:25:54.123 "product_name": "NVMe disk", 00:25:54.123 "block_size": 4096, 00:25:54.123 "num_blocks": 1310720, 00:25:54.123 "uuid": "4a83077c-ce78-4ba3-9946-4771bd28e9cb", 00:25:54.123 "assigned_rate_limits": { 00:25:54.123 "rw_ios_per_sec": 0, 00:25:54.123 "rw_mbytes_per_sec": 0, 00:25:54.123 "r_mbytes_per_sec": 0, 00:25:54.123 "w_mbytes_per_sec": 0 00:25:54.123 }, 00:25:54.123 "claimed": true, 00:25:54.123 "claim_type": "read_many_write_one", 00:25:54.123 "zoned": false, 00:25:54.123 "supported_io_types": { 00:25:54.123 "read": true, 00:25:54.123 "write": true, 00:25:54.123 "unmap": true, 00:25:54.123 "flush": true, 00:25:54.123 "reset": true, 00:25:54.123 "nvme_admin": true, 00:25:54.123 "nvme_io": true, 00:25:54.123 "nvme_io_md": false, 00:25:54.123 "write_zeroes": true, 00:25:54.123 "zcopy": false, 00:25:54.123 "get_zone_info": false, 00:25:54.123 "zone_management": false, 00:25:54.123 "zone_append": false, 00:25:54.123 "compare": true, 00:25:54.123 "compare_and_write": false, 00:25:54.123 "abort": true, 00:25:54.123 "seek_hole": false, 00:25:54.123 "seek_data": false, 00:25:54.123 "copy": true, 00:25:54.123 "nvme_iov_md": false 00:25:54.123 }, 00:25:54.123 "driver_specific": { 00:25:54.123 "nvme": [ 00:25:54.123 { 00:25:54.123 "pci_address": "0000:00:11.0", 00:25:54.123 "trid": { 00:25:54.123 "trtype": "PCIe", 00:25:54.123 "traddr": "0000:00:11.0" 00:25:54.123 }, 00:25:54.123 "ctrlr_data": { 00:25:54.123 "cntlid": 0, 00:25:54.123 "vendor_id": "0x1b36", 00:25:54.123 "model_number": "QEMU NVMe Ctrl", 00:25:54.123 "serial_number": "12341", 00:25:54.123 "firmware_revision": "8.0.0", 00:25:54.123 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:54.123 "oacs": { 00:25:54.123 "security": 0, 00:25:54.123 "format": 1, 00:25:54.123 "firmware": 0, 00:25:54.123 "ns_manage": 1 00:25:54.123 }, 00:25:54.123 "multi_ctrlr": false, 00:25:54.123 "ana_reporting": false 00:25:54.123 }, 00:25:54.123 "vs": { 00:25:54.123 "nvme_version": "1.4" 00:25:54.123 }, 00:25:54.123 "ns_data": { 00:25:54.123 "id": 1, 00:25:54.123 "can_share": false 00:25:54.123 } 00:25:54.123 } 00:25:54.123 ], 00:25:54.123 "mp_policy": "active_passive" 00:25:54.123 } 00:25:54.123 } 00:25:54.123 ]' 00:25:54.123 09:51:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:54.123 09:51:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:25:54.123 09:51:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:54.123 09:51:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:25:54.123 09:51:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:25:54.123 09:51:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:25:54.123 09:51:31 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:25:54.123 09:51:31 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:54.123 09:51:31 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:25:54.123 09:51:31 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:54.123 09:51:31 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:54.382 09:51:31 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=fb3ec3fc-dedf-42a8-b546-5c8add84dacc 00:25:54.382 09:51:31 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:25:54.382 09:51:31 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fb3ec3fc-dedf-42a8-b546-5c8add84dacc 00:25:54.640 09:51:32 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:54.640 09:51:32 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=7aefb964-ed09-4699-beb0-804ac75da744 00:25:54.640 09:51:32 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7aefb964-ed09-4699-beb0-804ac75da744 00:25:54.899 09:51:32 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=b0dd43dc-0605-44cf-ae04-6d7875b07649 00:25:54.899 09:51:32 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:25:54.899 09:51:32 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b0dd43dc-0605-44cf-ae04-6d7875b07649 00:25:54.899 09:51:32 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:25:54.899 09:51:32 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:54.899 09:51:32 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=b0dd43dc-0605-44cf-ae04-6d7875b07649 00:25:54.899 09:51:32 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:25:54.899 09:51:32 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size b0dd43dc-0605-44cf-ae04-6d7875b07649 00:25:54.899 09:51:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=b0dd43dc-0605-44cf-ae04-6d7875b07649 00:25:54.899 09:51:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:54.899 09:51:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:25:54.899 09:51:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:25:54.899 09:51:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b0dd43dc-0605-44cf-ae04-6d7875b07649 00:25:55.157 09:51:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:55.157 { 00:25:55.157 "name": "b0dd43dc-0605-44cf-ae04-6d7875b07649", 00:25:55.157 "aliases": [ 00:25:55.157 "lvs/nvme0n1p0" 00:25:55.157 ], 00:25:55.157 "product_name": "Logical Volume", 00:25:55.157 "block_size": 4096, 00:25:55.157 "num_blocks": 26476544, 00:25:55.157 "uuid": "b0dd43dc-0605-44cf-ae04-6d7875b07649", 00:25:55.157 "assigned_rate_limits": { 00:25:55.157 "rw_ios_per_sec": 0, 00:25:55.157 "rw_mbytes_per_sec": 0, 00:25:55.157 "r_mbytes_per_sec": 0, 00:25:55.157 "w_mbytes_per_sec": 0 00:25:55.157 }, 00:25:55.157 "claimed": false, 00:25:55.157 "zoned": false, 00:25:55.157 "supported_io_types": { 00:25:55.157 "read": true, 00:25:55.157 "write": true, 00:25:55.157 "unmap": true, 00:25:55.157 "flush": false, 00:25:55.157 "reset": true, 00:25:55.157 "nvme_admin": false, 00:25:55.157 "nvme_io": false, 00:25:55.157 "nvme_io_md": false, 00:25:55.157 "write_zeroes": true, 00:25:55.157 "zcopy": false, 00:25:55.157 "get_zone_info": false, 00:25:55.157 "zone_management": false, 00:25:55.157 
"zone_append": false, 00:25:55.157 "compare": false, 00:25:55.157 "compare_and_write": false, 00:25:55.157 "abort": false, 00:25:55.157 "seek_hole": true, 00:25:55.157 "seek_data": true, 00:25:55.157 "copy": false, 00:25:55.157 "nvme_iov_md": false 00:25:55.157 }, 00:25:55.157 "driver_specific": { 00:25:55.157 "lvol": { 00:25:55.157 "lvol_store_uuid": "7aefb964-ed09-4699-beb0-804ac75da744", 00:25:55.157 "base_bdev": "nvme0n1", 00:25:55.157 "thin_provision": true, 00:25:55.157 "num_allocated_clusters": 0, 00:25:55.157 "snapshot": false, 00:25:55.157 "clone": false, 00:25:55.157 "esnap_clone": false 00:25:55.157 } 00:25:55.157 } 00:25:55.157 } 00:25:55.157 ]' 00:25:55.157 09:51:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:55.157 09:51:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:25:55.157 09:51:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:55.416 09:51:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:55.416 09:51:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:55.416 09:51:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:25:55.416 09:51:32 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:25:55.416 09:51:32 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:25:55.416 09:51:32 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:55.674 09:51:33 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:55.674 09:51:33 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:55.674 09:51:33 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size b0dd43dc-0605-44cf-ae04-6d7875b07649 00:25:55.674 09:51:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=b0dd43dc-0605-44cf-ae04-6d7875b07649 00:25:55.674 09:51:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:55.674 09:51:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:25:55.674 09:51:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:25:55.674 09:51:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b0dd43dc-0605-44cf-ae04-6d7875b07649 00:25:55.674 09:51:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:55.674 { 00:25:55.674 "name": "b0dd43dc-0605-44cf-ae04-6d7875b07649", 00:25:55.674 "aliases": [ 00:25:55.674 "lvs/nvme0n1p0" 00:25:55.674 ], 00:25:55.674 "product_name": "Logical Volume", 00:25:55.674 "block_size": 4096, 00:25:55.674 "num_blocks": 26476544, 00:25:55.674 "uuid": "b0dd43dc-0605-44cf-ae04-6d7875b07649", 00:25:55.674 "assigned_rate_limits": { 00:25:55.674 "rw_ios_per_sec": 0, 00:25:55.674 "rw_mbytes_per_sec": 0, 00:25:55.674 "r_mbytes_per_sec": 0, 00:25:55.674 "w_mbytes_per_sec": 0 00:25:55.674 }, 00:25:55.674 "claimed": false, 00:25:55.674 "zoned": false, 00:25:55.674 "supported_io_types": { 00:25:55.674 "read": true, 00:25:55.674 "write": true, 00:25:55.674 "unmap": true, 00:25:55.674 "flush": false, 00:25:55.674 "reset": true, 00:25:55.674 "nvme_admin": false, 00:25:55.674 "nvme_io": false, 00:25:55.674 "nvme_io_md": false, 00:25:55.674 "write_zeroes": true, 00:25:55.674 "zcopy": false, 00:25:55.674 "get_zone_info": false, 00:25:55.674 
"zone_management": false, 00:25:55.674 "zone_append": false, 00:25:55.674 "compare": false, 00:25:55.674 "compare_and_write": false, 00:25:55.674 "abort": false, 00:25:55.674 "seek_hole": true, 00:25:55.674 "seek_data": true, 00:25:55.674 "copy": false, 00:25:55.674 "nvme_iov_md": false 00:25:55.674 }, 00:25:55.674 "driver_specific": { 00:25:55.674 "lvol": { 00:25:55.674 "lvol_store_uuid": "7aefb964-ed09-4699-beb0-804ac75da744", 00:25:55.674 "base_bdev": "nvme0n1", 00:25:55.674 "thin_provision": true, 00:25:55.674 "num_allocated_clusters": 0, 00:25:55.674 "snapshot": false, 00:25:55.674 "clone": false, 00:25:55.674 "esnap_clone": false 00:25:55.674 } 00:25:55.674 } 00:25:55.674 } 00:25:55.674 ]' 00:25:55.674 09:51:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:55.932 09:51:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:25:55.932 09:51:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:55.932 09:51:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:55.932 09:51:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:55.932 09:51:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:25:55.932 09:51:33 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:25:55.932 09:51:33 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:56.191 09:51:33 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:25:56.191 09:51:33 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size b0dd43dc-0605-44cf-ae04-6d7875b07649 00:25:56.191 09:51:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=b0dd43dc-0605-44cf-ae04-6d7875b07649 00:25:56.191 09:51:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:56.191 09:51:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:25:56.191 09:51:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:25:56.191 09:51:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b0dd43dc-0605-44cf-ae04-6d7875b07649 00:25:56.191 09:51:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:56.191 { 00:25:56.191 "name": "b0dd43dc-0605-44cf-ae04-6d7875b07649", 00:25:56.191 "aliases": [ 00:25:56.191 "lvs/nvme0n1p0" 00:25:56.191 ], 00:25:56.191 "product_name": "Logical Volume", 00:25:56.191 "block_size": 4096, 00:25:56.191 "num_blocks": 26476544, 00:25:56.191 "uuid": "b0dd43dc-0605-44cf-ae04-6d7875b07649", 00:25:56.191 "assigned_rate_limits": { 00:25:56.191 "rw_ios_per_sec": 0, 00:25:56.191 "rw_mbytes_per_sec": 0, 00:25:56.191 "r_mbytes_per_sec": 0, 00:25:56.191 "w_mbytes_per_sec": 0 00:25:56.191 }, 00:25:56.191 "claimed": false, 00:25:56.191 "zoned": false, 00:25:56.191 "supported_io_types": { 00:25:56.191 "read": true, 00:25:56.191 "write": true, 00:25:56.191 "unmap": true, 00:25:56.191 "flush": false, 00:25:56.191 "reset": true, 00:25:56.191 "nvme_admin": false, 00:25:56.191 "nvme_io": false, 00:25:56.191 "nvme_io_md": false, 00:25:56.191 "write_zeroes": true, 00:25:56.191 "zcopy": false, 00:25:56.191 "get_zone_info": false, 00:25:56.191 "zone_management": false, 00:25:56.191 "zone_append": false, 00:25:56.191 "compare": false, 00:25:56.191 "compare_and_write": false, 00:25:56.191 "abort": false, 
00:25:56.191 "seek_hole": true, 00:25:56.191 "seek_data": true, 00:25:56.191 "copy": false, 00:25:56.191 "nvme_iov_md": false 00:25:56.191 }, 00:25:56.191 "driver_specific": { 00:25:56.191 "lvol": { 00:25:56.191 "lvol_store_uuid": "7aefb964-ed09-4699-beb0-804ac75da744", 00:25:56.191 "base_bdev": "nvme0n1", 00:25:56.191 "thin_provision": true, 00:25:56.191 "num_allocated_clusters": 0, 00:25:56.191 "snapshot": false, 00:25:56.191 "clone": false, 00:25:56.191 "esnap_clone": false 00:25:56.191 } 00:25:56.191 } 00:25:56.191 } 00:25:56.191 ]' 00:25:56.192 09:51:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:56.450 09:51:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:25:56.450 09:51:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:56.450 09:51:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:56.450 09:51:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:56.450 09:51:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:25:56.450 09:51:34 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:25:56.450 09:51:34 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b0dd43dc-0605-44cf-ae04-6d7875b07649 --l2p_dram_limit 10' 00:25:56.450 09:51:34 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:25:56.450 09:51:34 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:25:56.450 09:51:34 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:56.450 09:51:34 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:25:56.450 09:51:34 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:25:56.450 09:51:34 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b0dd43dc-0605-44cf-ae04-6d7875b07649 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:25:56.450 [2024-07-24 09:51:34.265502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.450 [2024-07-24 09:51:34.265564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:56.450 [2024-07-24 09:51:34.265584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:56.450 [2024-07-24 09:51:34.265595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.450 [2024-07-24 09:51:34.265662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.450 [2024-07-24 09:51:34.265678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:56.450 [2024-07-24 09:51:34.265692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:56.450 [2024-07-24 09:51:34.265702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.451 [2024-07-24 09:51:34.265730] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:56.451 [2024-07-24 09:51:34.266067] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:56.451 [2024-07-24 09:51:34.266097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.451 [2024-07-24 09:51:34.266109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:56.451 [2024-07-24 09:51:34.266123] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:25:56.451 [2024-07-24 09:51:34.266133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.451 [2024-07-24 09:51:34.266251] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5933179e-c109-4606-8639-197609436d36 00:25:56.710 [2024-07-24 09:51:34.267683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.710 [2024-07-24 09:51:34.267721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:56.710 [2024-07-24 09:51:34.267734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:56.710 [2024-07-24 09:51:34.267746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.710 [2024-07-24 09:51:34.275478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.710 [2024-07-24 09:51:34.275520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:56.710 [2024-07-24 09:51:34.275533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.683 ms 00:25:56.710 [2024-07-24 09:51:34.275550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.710 [2024-07-24 09:51:34.275636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.710 [2024-07-24 09:51:34.275660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:56.710 [2024-07-24 09:51:34.275671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:56.710 [2024-07-24 09:51:34.275685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.710 [2024-07-24 09:51:34.275753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.710 [2024-07-24 09:51:34.275767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:56.710 [2024-07-24 09:51:34.275778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:56.710 [2024-07-24 09:51:34.275791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.710 [2024-07-24 09:51:34.275815] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:56.710 [2024-07-24 09:51:34.277793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.710 [2024-07-24 09:51:34.277824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:56.710 [2024-07-24 09:51:34.277840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.984 ms 00:25:56.710 [2024-07-24 09:51:34.277851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.710 [2024-07-24 09:51:34.277900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.710 [2024-07-24 09:51:34.277917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:56.710 [2024-07-24 09:51:34.277943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:56.710 [2024-07-24 09:51:34.277954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.710 [2024-07-24 09:51:34.277990] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:56.710 [2024-07-24 09:51:34.278125] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:56.710 [2024-07-24 09:51:34.278143] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:56.710 [2024-07-24 09:51:34.278158] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:56.710 [2024-07-24 09:51:34.278173] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:56.710 [2024-07-24 09:51:34.278185] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:56.710 [2024-07-24 09:51:34.278223] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:56.710 [2024-07-24 09:51:34.278233] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:56.710 [2024-07-24 09:51:34.278245] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:56.710 [2024-07-24 09:51:34.278255] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:56.710 [2024-07-24 09:51:34.278277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.710 [2024-07-24 09:51:34.278288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:56.710 [2024-07-24 09:51:34.278300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:25:56.710 [2024-07-24 09:51:34.278310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.710 [2024-07-24 09:51:34.278384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.710 [2024-07-24 09:51:34.278395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:56.710 [2024-07-24 09:51:34.278414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:56.710 [2024-07-24 09:51:34.278424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.710 [2024-07-24 09:51:34.278518] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:56.710 [2024-07-24 09:51:34.278531] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:56.710 [2024-07-24 09:51:34.278544] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:56.710 [2024-07-24 09:51:34.278558] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.710 [2024-07-24 09:51:34.278570] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:56.710 [2024-07-24 09:51:34.278580] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:56.710 [2024-07-24 09:51:34.278592] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:56.711 [2024-07-24 09:51:34.278601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:56.711 [2024-07-24 09:51:34.278613] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:56.711 [2024-07-24 09:51:34.278622] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:56.711 [2024-07-24 09:51:34.278633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:56.711 [2024-07-24 09:51:34.278643] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:56.711 [2024-07-24 09:51:34.278655] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:56.711 [2024-07-24 09:51:34.278664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:56.711 [2024-07-24 09:51:34.278679] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:56.711 [2024-07-24 09:51:34.278688] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.711 [2024-07-24 09:51:34.278699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:56.711 [2024-07-24 09:51:34.278708] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:56.711 [2024-07-24 09:51:34.278720] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.711 [2024-07-24 09:51:34.278730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:56.711 [2024-07-24 09:51:34.278742] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:56.711 [2024-07-24 09:51:34.278750] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.711 [2024-07-24 09:51:34.278762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:56.711 [2024-07-24 09:51:34.278771] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:56.711 [2024-07-24 09:51:34.278784] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.711 [2024-07-24 09:51:34.278793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:56.711 [2024-07-24 09:51:34.278805] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:56.711 [2024-07-24 09:51:34.278814] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.711 [2024-07-24 09:51:34.278825] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:56.711 [2024-07-24 09:51:34.278834] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:56.711 [2024-07-24 09:51:34.278848] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.711 [2024-07-24 09:51:34.278858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:56.711 [2024-07-24 09:51:34.278869] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:56.711 [2024-07-24 09:51:34.278878] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:56.711 [2024-07-24 09:51:34.278890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:56.711 [2024-07-24 09:51:34.278899] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:56.711 [2024-07-24 09:51:34.278910] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:56.711 [2024-07-24 09:51:34.278919] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:56.711 [2024-07-24 09:51:34.278931] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:56.711 [2024-07-24 09:51:34.278940] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.711 [2024-07-24 09:51:34.278951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:56.711 [2024-07-24 09:51:34.278960] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:56.711 [2024-07-24 09:51:34.278971] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.711 [2024-07-24 09:51:34.278980] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:56.711 [2024-07-24 09:51:34.278992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:56.711 [2024-07-24 09:51:34.279001] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:25:56.711 [2024-07-24 09:51:34.279023] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.711 [2024-07-24 09:51:34.279038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:56.711 [2024-07-24 09:51:34.279050] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:56.711 [2024-07-24 09:51:34.279060] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:56.711 [2024-07-24 09:51:34.279073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:56.711 [2024-07-24 09:51:34.279082] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:56.711 [2024-07-24 09:51:34.279093] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:56.711 [2024-07-24 09:51:34.279107] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:56.711 [2024-07-24 09:51:34.279122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:56.711 [2024-07-24 09:51:34.279134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:56.711 [2024-07-24 09:51:34.279147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:56.711 [2024-07-24 09:51:34.279157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:56.711 [2024-07-24 09:51:34.279170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:56.711 [2024-07-24 09:51:34.279180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:56.711 [2024-07-24 09:51:34.279203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:56.711 [2024-07-24 09:51:34.279215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:56.711 [2024-07-24 09:51:34.279230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:56.711 [2024-07-24 09:51:34.279240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:56.711 [2024-07-24 09:51:34.279254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:56.711 [2024-07-24 09:51:34.279264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:56.711 [2024-07-24 09:51:34.279277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:56.711 [2024-07-24 09:51:34.279287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:56.711 [2024-07-24 09:51:34.279300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:25:56.711 [2024-07-24 09:51:34.279310] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:56.711 [2024-07-24 09:51:34.279332] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:56.711 [2024-07-24 09:51:34.279350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:56.711 [2024-07-24 09:51:34.279363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:56.711 [2024-07-24 09:51:34.279373] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:56.711 [2024-07-24 09:51:34.279386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:56.711 [2024-07-24 09:51:34.279397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.711 [2024-07-24 09:51:34.279410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:56.711 [2024-07-24 09:51:34.279420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.936 ms 00:25:56.711 [2024-07-24 09:51:34.279435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.711 [2024-07-24 09:51:34.279477] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:56.711 [2024-07-24 09:51:34.279492] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:59.996 [2024-07-24 09:51:37.345646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.996 [2024-07-24 09:51:37.345913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:59.996 [2024-07-24 09:51:37.346054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3071.142 ms 00:25:59.996 [2024-07-24 09:51:37.346103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.996 [2024-07-24 09:51:37.358152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.996 [2024-07-24 09:51:37.358410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:59.996 [2024-07-24 09:51:37.358532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.936 ms 00:25:59.996 [2024-07-24 09:51:37.358583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.996 [2024-07-24 09:51:37.358719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.996 [2024-07-24 09:51:37.358761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:59.996 [2024-07-24 09:51:37.358860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:59.996 [2024-07-24 09:51:37.358902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.996 [2024-07-24 09:51:37.370130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.996 [2024-07-24 09:51:37.370363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:59.996 [2024-07-24 09:51:37.370495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.071 ms 00:25:59.996 [2024-07-24 09:51:37.370564] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.996 [2024-07-24 09:51:37.370636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.996 [2024-07-24 09:51:37.370675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:59.996 [2024-07-24 09:51:37.370763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:59.996 [2024-07-24 09:51:37.370807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.996 [2024-07-24 09:51:37.371350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.996 [2024-07-24 09:51:37.371406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:59.996 [2024-07-24 09:51:37.371586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:25:59.996 [2024-07-24 09:51:37.371635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.996 [2024-07-24 09:51:37.371770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.996 [2024-07-24 09:51:37.371828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:59.996 [2024-07-24 09:51:37.371901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:25:59.996 [2024-07-24 09:51:37.371935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.997 [2024-07-24 09:51:37.379333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.997 [2024-07-24 09:51:37.379489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:59.997 [2024-07-24 09:51:37.379620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.350 ms 00:25:59.997 [2024-07-24 09:51:37.379665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.997 [2024-07-24 09:51:37.388180] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:59.997 [2024-07-24 09:51:37.391758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.997 [2024-07-24 09:51:37.391886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:59.997 [2024-07-24 09:51:37.391970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.002 ms 00:25:59.997 [2024-07-24 09:51:37.392025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.997 [2024-07-24 09:51:37.473256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.997 [2024-07-24 09:51:37.473487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:59.997 [2024-07-24 09:51:37.473520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.223 ms 00:25:59.997 [2024-07-24 09:51:37.473532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.997 [2024-07-24 09:51:37.473731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.997 [2024-07-24 09:51:37.473744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:59.997 [2024-07-24 09:51:37.473759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:25:59.997 [2024-07-24 09:51:37.473770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.997 [2024-07-24 09:51:37.477281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.997 [2024-07-24 09:51:37.477321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial 
band info metadata 00:25:59.997 [2024-07-24 09:51:37.477342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.471 ms 00:25:59.997 [2024-07-24 09:51:37.477362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.997 [2024-07-24 09:51:37.480128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.997 [2024-07-24 09:51:37.480163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:59.997 [2024-07-24 09:51:37.480180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.725 ms 00:25:59.997 [2024-07-24 09:51:37.480199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.997 [2024-07-24 09:51:37.480497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.997 [2024-07-24 09:51:37.480512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:59.997 [2024-07-24 09:51:37.480526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:25:59.997 [2024-07-24 09:51:37.480536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.997 [2024-07-24 09:51:37.521126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.997 [2024-07-24 09:51:37.521364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:59.997 [2024-07-24 09:51:37.521448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.619 ms 00:25:59.997 [2024-07-24 09:51:37.521485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.997 [2024-07-24 09:51:37.526209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.997 [2024-07-24 09:51:37.526364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:59.997 [2024-07-24 09:51:37.526457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.630 ms 00:25:59.997 [2024-07-24 09:51:37.526521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.997 [2024-07-24 09:51:37.529844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.997 [2024-07-24 09:51:37.529978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:59.997 [2024-07-24 09:51:37.530058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.259 ms 00:25:59.997 [2024-07-24 09:51:37.530093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.997 [2024-07-24 09:51:37.533729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.997 [2024-07-24 09:51:37.533871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:59.997 [2024-07-24 09:51:37.533954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.554 ms 00:25:59.997 [2024-07-24 09:51:37.533991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.997 [2024-07-24 09:51:37.534103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.997 [2024-07-24 09:51:37.534176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:59.997 [2024-07-24 09:51:37.534288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:59.997 [2024-07-24 09:51:37.534304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.997 [2024-07-24 09:51:37.534377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.997 [2024-07-24 09:51:37.534389] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:59.997 [2024-07-24 09:51:37.534406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:59.997 [2024-07-24 09:51:37.534416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.997 [2024-07-24 09:51:37.535452] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3274.823 ms, result 0 00:25:59.997 { 00:25:59.997 "name": "ftl0", 00:25:59.997 "uuid": "5933179e-c109-4606-8639-197609436d36" 00:25:59.997 } 00:25:59.997 09:51:37 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:25:59.997 09:51:37 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:59.997 09:51:37 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:25:59.997 09:51:37 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:00.258 [2024-07-24 09:51:37.928587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.258 [2024-07-24 09:51:37.928656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:00.258 [2024-07-24 09:51:37.928673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:00.258 [2024-07-24 09:51:37.928687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.258 [2024-07-24 09:51:37.928716] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:00.258 [2024-07-24 09:51:37.929469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.258 [2024-07-24 09:51:37.929500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:00.258 [2024-07-24 09:51:37.929521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:26:00.258 [2024-07-24 09:51:37.929532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.258 [2024-07-24 09:51:37.929779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.258 [2024-07-24 09:51:37.929796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:00.258 [2024-07-24 09:51:37.929818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:26:00.258 [2024-07-24 09:51:37.929829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.258 [2024-07-24 09:51:37.932555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.258 [2024-07-24 09:51:37.932578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:00.258 [2024-07-24 09:51:37.932593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.709 ms 00:26:00.258 [2024-07-24 09:51:37.932603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.258 [2024-07-24 09:51:37.937983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.258 [2024-07-24 09:51:37.938017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:00.258 [2024-07-24 09:51:37.938032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.349 ms 00:26:00.258 [2024-07-24 09:51:37.938045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.258 [2024-07-24 09:51:37.939786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:00.258 [2024-07-24 09:51:37.939825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:00.258 [2024-07-24 09:51:37.939843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.643 ms 00:26:00.258 [2024-07-24 09:51:37.939853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.258 [2024-07-24 09:51:37.944245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.258 [2024-07-24 09:51:37.944285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:00.258 [2024-07-24 09:51:37.944302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.360 ms 00:26:00.258 [2024-07-24 09:51:37.944313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.258 [2024-07-24 09:51:37.944433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.258 [2024-07-24 09:51:37.944464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:00.258 [2024-07-24 09:51:37.944477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:26:00.258 [2024-07-24 09:51:37.944487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.258 [2024-07-24 09:51:37.946353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.258 [2024-07-24 09:51:37.946386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:00.258 [2024-07-24 09:51:37.946400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.844 ms 00:26:00.258 [2024-07-24 09:51:37.946409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.258 [2024-07-24 09:51:37.947607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.258 [2024-07-24 09:51:37.947642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:00.258 [2024-07-24 09:51:37.947658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.163 ms 00:26:00.258 [2024-07-24 09:51:37.947668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.258 [2024-07-24 09:51:37.948675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.258 [2024-07-24 09:51:37.948708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:00.258 [2024-07-24 09:51:37.948723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:26:00.258 [2024-07-24 09:51:37.948732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.258 [2024-07-24 09:51:37.949750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.258 [2024-07-24 09:51:37.949782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:00.258 [2024-07-24 09:51:37.949798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:26:00.258 [2024-07-24 09:51:37.949807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.258 [2024-07-24 09:51:37.949840] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:00.258 [2024-07-24 09:51:37.949857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.949873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.949885] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.949900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.949911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.949931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.949953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.949967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.949978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.949992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950220] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:00.258 [2024-07-24 09:51:37.950255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 
09:51:37.950545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:26:00.259 [2024-07-24 09:51:37.950861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.950991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.951002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.951014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.951025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.951038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.951048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.951061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.951072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.951085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.951096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.951109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.951119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.951132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:00.259 [2024-07-24 09:51:37.951149] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:00.259 [2024-07-24 09:51:37.951164] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5933179e-c109-4606-8639-197609436d36 00:26:00.259 
[2024-07-24 09:51:37.951175] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:00.259 [2024-07-24 09:51:37.951197] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:00.259 [2024-07-24 09:51:37.951207] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:00.259 [2024-07-24 09:51:37.951219] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:00.259 [2024-07-24 09:51:37.951232] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:00.259 [2024-07-24 09:51:37.951246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:00.259 [2024-07-24 09:51:37.951255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:00.259 [2024-07-24 09:51:37.951267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:00.259 [2024-07-24 09:51:37.951276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:00.259 [2024-07-24 09:51:37.951288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.259 [2024-07-24 09:51:37.951298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:00.259 [2024-07-24 09:51:37.951311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.452 ms 00:26:00.259 [2024-07-24 09:51:37.951321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.259 [2024-07-24 09:51:37.953144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.259 [2024-07-24 09:51:37.953165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:00.259 [2024-07-24 09:51:37.953185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.801 ms 00:26:00.259 [2024-07-24 09:51:37.953195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.260 [2024-07-24 09:51:37.953326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.260 [2024-07-24 09:51:37.953339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:00.260 [2024-07-24 09:51:37.953352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:26:00.260 [2024-07-24 09:51:37.953361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.260 [2024-07-24 09:51:37.960504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:00.260 [2024-07-24 09:51:37.960638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:00.260 [2024-07-24 09:51:37.960716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:00.260 [2024-07-24 09:51:37.960755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.260 [2024-07-24 09:51:37.960839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:00.260 [2024-07-24 09:51:37.960873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:00.260 [2024-07-24 09:51:37.960921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:00.260 [2024-07-24 09:51:37.960952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.260 [2024-07-24 09:51:37.961101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:00.260 [2024-07-24 09:51:37.961224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:00.260 [2024-07-24 09:51:37.961276] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:00.260 [2024-07-24 09:51:37.961314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.260 [2024-07-24 09:51:37.961365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:00.260 [2024-07-24 09:51:37.961398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:00.260 [2024-07-24 09:51:37.961484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:00.260 [2024-07-24 09:51:37.961527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.260 [2024-07-24 09:51:37.975387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:00.260 [2024-07-24 09:51:37.975590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:00.260 [2024-07-24 09:51:37.975693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:00.260 [2024-07-24 09:51:37.975729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.260 [2024-07-24 09:51:37.984063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:00.260 [2024-07-24 09:51:37.984245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:00.260 [2024-07-24 09:51:37.984329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:00.260 [2024-07-24 09:51:37.984365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.260 [2024-07-24 09:51:37.984476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:00.260 [2024-07-24 09:51:37.984512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:00.260 [2024-07-24 09:51:37.984549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:00.260 [2024-07-24 09:51:37.984583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.260 [2024-07-24 09:51:37.984716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:00.260 [2024-07-24 09:51:37.984755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:00.260 [2024-07-24 09:51:37.984788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:00.260 [2024-07-24 09:51:37.984827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.260 [2024-07-24 09:51:37.985062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:00.260 [2024-07-24 09:51:37.985098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:00.260 [2024-07-24 09:51:37.985131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:00.260 [2024-07-24 09:51:37.985218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.260 [2024-07-24 09:51:37.985358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:00.260 [2024-07-24 09:51:37.985397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:00.260 [2024-07-24 09:51:37.985472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:00.260 [2024-07-24 09:51:37.985543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.260 [2024-07-24 09:51:37.985614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:00.260 [2024-07-24 09:51:37.985683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:26:00.260 [2024-07-24 09:51:37.985726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:00.260 [2024-07-24 09:51:37.985756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.260 [2024-07-24 09:51:37.985909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:00.260 [2024-07-24 09:51:37.985944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:00.260 [2024-07-24 09:51:37.985976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:00.260 [2024-07-24 09:51:37.986006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.260 [2024-07-24 09:51:37.986175] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.645 ms, result 0 00:26:00.260 true 00:26:00.260 09:51:38 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 95397 00:26:00.260 09:51:38 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 95397 ']' 00:26:00.260 09:51:38 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 95397 00:26:00.260 09:51:38 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:26:00.260 09:51:38 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:00.260 09:51:38 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95397 00:26:00.260 killing process with pid 95397 00:26:00.260 09:51:38 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:00.260 09:51:38 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:00.260 09:51:38 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95397' 00:26:00.260 09:51:38 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 95397 00:26:00.260 09:51:38 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 95397 00:26:04.463 09:51:41 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:26:08.643 262144+0 records in 00:26:08.643 262144+0 records out 00:26:08.643 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.07312 s, 264 MB/s 00:26:08.643 09:51:45 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:10.020 09:51:47 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:10.020 [2024-07-24 09:51:47.770698] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:26:10.020 [2024-07-24 09:51:47.770821] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95596 ] 00:26:10.280 [2024-07-24 09:51:47.938633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.280 [2024-07-24 09:51:47.982318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.280 [2024-07-24 09:51:48.083498] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:10.280 [2024-07-24 09:51:48.083575] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:10.540 [2024-07-24 09:51:48.242487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.540 [2024-07-24 09:51:48.242564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:10.540 [2024-07-24 09:51:48.242595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:10.540 [2024-07-24 09:51:48.242610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.540 [2024-07-24 09:51:48.242706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.540 [2024-07-24 09:51:48.242728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:10.540 [2024-07-24 09:51:48.242744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:10.540 [2024-07-24 09:51:48.242757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.540 [2024-07-24 09:51:48.242794] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:10.540 [2024-07-24 09:51:48.243070] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:10.540 [2024-07-24 09:51:48.243102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.540 [2024-07-24 09:51:48.243117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:10.540 [2024-07-24 09:51:48.243140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:26:10.540 [2024-07-24 09:51:48.243166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.540 [2024-07-24 09:51:48.244853] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:10.540 [2024-07-24 09:51:48.247869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.540 [2024-07-24 09:51:48.247922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:10.540 [2024-07-24 09:51:48.247941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.022 ms 00:26:10.540 [2024-07-24 09:51:48.247955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.540 [2024-07-24 09:51:48.248033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.540 [2024-07-24 09:51:48.248053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:10.540 [2024-07-24 09:51:48.248069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:10.540 [2024-07-24 09:51:48.248092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.540 [2024-07-24 09:51:48.256053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:10.540 [2024-07-24 09:51:48.256257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:10.540 [2024-07-24 09:51:48.256400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.891 ms 00:26:10.540 [2024-07-24 09:51:48.256422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.540 [2024-07-24 09:51:48.256547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.540 [2024-07-24 09:51:48.256565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:10.540 [2024-07-24 09:51:48.256585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:26:10.540 [2024-07-24 09:51:48.256599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.540 [2024-07-24 09:51:48.256657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.540 [2024-07-24 09:51:48.256679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:10.540 [2024-07-24 09:51:48.256696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:10.540 [2024-07-24 09:51:48.256710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.540 [2024-07-24 09:51:48.256754] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:10.540 [2024-07-24 09:51:48.258969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.540 [2024-07-24 09:51:48.259030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:10.540 [2024-07-24 09:51:48.259054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.234 ms 00:26:10.540 [2024-07-24 09:51:48.259073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.540 [2024-07-24 09:51:48.259127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.540 [2024-07-24 09:51:48.259143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:10.540 [2024-07-24 09:51:48.259157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:10.540 [2024-07-24 09:51:48.259171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.540 [2024-07-24 09:51:48.259252] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:10.540 [2024-07-24 09:51:48.259288] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:10.540 [2024-07-24 09:51:48.259335] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:10.540 [2024-07-24 09:51:48.259370] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:10.540 [2024-07-24 09:51:48.259467] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:10.540 [2024-07-24 09:51:48.259485] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:10.540 [2024-07-24 09:51:48.259502] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:10.540 [2024-07-24 09:51:48.259518] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:10.540 [2024-07-24 09:51:48.259533] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:10.540 [2024-07-24 09:51:48.259547] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:10.540 [2024-07-24 09:51:48.259560] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:10.540 [2024-07-24 09:51:48.259573] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:10.540 [2024-07-24 09:51:48.259595] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:10.540 [2024-07-24 09:51:48.259610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.540 [2024-07-24 09:51:48.259629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:10.540 [2024-07-24 09:51:48.259644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:26:10.540 [2024-07-24 09:51:48.259657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.540 [2024-07-24 09:51:48.259741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.540 [2024-07-24 09:51:48.259762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:10.540 [2024-07-24 09:51:48.259785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:26:10.540 [2024-07-24 09:51:48.259797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.540 [2024-07-24 09:51:48.259889] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:10.540 [2024-07-24 09:51:48.259915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:10.540 [2024-07-24 09:51:48.259934] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:10.540 [2024-07-24 09:51:48.259948] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.540 [2024-07-24 09:51:48.259960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:10.540 [2024-07-24 09:51:48.259974] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:10.541 [2024-07-24 09:51:48.259989] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:10.541 [2024-07-24 09:51:48.260002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:10.541 [2024-07-24 09:51:48.260015] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:10.541 [2024-07-24 09:51:48.260028] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:10.541 [2024-07-24 09:51:48.260053] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:10.541 [2024-07-24 09:51:48.260067] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:10.541 [2024-07-24 09:51:48.260080] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:10.541 [2024-07-24 09:51:48.260098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:10.541 [2024-07-24 09:51:48.260112] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:10.541 [2024-07-24 09:51:48.260128] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.541 [2024-07-24 09:51:48.260141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:10.541 [2024-07-24 09:51:48.260154] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:10.541 [2024-07-24 09:51:48.260167] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.541 [2024-07-24 09:51:48.260181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:10.541 [2024-07-24 09:51:48.260211] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:10.541 [2024-07-24 09:51:48.260224] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.541 [2024-07-24 09:51:48.260237] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:10.541 [2024-07-24 09:51:48.260250] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:10.541 [2024-07-24 09:51:48.260263] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.541 [2024-07-24 09:51:48.260277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:10.541 [2024-07-24 09:51:48.260290] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:10.541 [2024-07-24 09:51:48.260303] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.541 [2024-07-24 09:51:48.260316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:10.541 [2024-07-24 09:51:48.260339] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:10.541 [2024-07-24 09:51:48.260353] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.541 [2024-07-24 09:51:48.260366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:10.541 [2024-07-24 09:51:48.260379] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:10.541 [2024-07-24 09:51:48.260391] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:10.541 [2024-07-24 09:51:48.260404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:10.541 [2024-07-24 09:51:48.260417] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:10.541 [2024-07-24 09:51:48.260430] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:10.541 [2024-07-24 09:51:48.260445] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:10.541 [2024-07-24 09:51:48.260458] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:10.541 [2024-07-24 09:51:48.260471] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.541 [2024-07-24 09:51:48.260484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:10.541 [2024-07-24 09:51:48.260497] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:10.541 [2024-07-24 09:51:48.260510] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.541 [2024-07-24 09:51:48.260522] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:10.541 [2024-07-24 09:51:48.260536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:10.541 [2024-07-24 09:51:48.260553] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:10.541 [2024-07-24 09:51:48.260567] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.541 [2024-07-24 09:51:48.260582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:10.541 [2024-07-24 09:51:48.260595] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:10.541 [2024-07-24 09:51:48.260607] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:10.541 
[2024-07-24 09:51:48.260619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:10.541 [2024-07-24 09:51:48.260631] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:10.541 [2024-07-24 09:51:48.260644] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:10.541 [2024-07-24 09:51:48.260658] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:10.541 [2024-07-24 09:51:48.260675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:10.541 [2024-07-24 09:51:48.260691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:10.541 [2024-07-24 09:51:48.260706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:10.541 [2024-07-24 09:51:48.260721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:10.541 [2024-07-24 09:51:48.260737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:10.541 [2024-07-24 09:51:48.260751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:10.541 [2024-07-24 09:51:48.260766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:10.541 [2024-07-24 09:51:48.260785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:10.541 [2024-07-24 09:51:48.260799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:10.541 [2024-07-24 09:51:48.260814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:10.541 [2024-07-24 09:51:48.260829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:10.541 [2024-07-24 09:51:48.260842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:10.541 [2024-07-24 09:51:48.260857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:10.541 [2024-07-24 09:51:48.260871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:10.541 [2024-07-24 09:51:48.260900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:10.541 [2024-07-24 09:51:48.260914] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:10.541 [2024-07-24 09:51:48.260929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:10.541 [2024-07-24 09:51:48.260943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:10.541 [2024-07-24 09:51:48.260957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:10.541 [2024-07-24 09:51:48.260971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:10.541 [2024-07-24 09:51:48.260988] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:10.541 [2024-07-24 09:51:48.261004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.541 [2024-07-24 09:51:48.261023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:10.541 [2024-07-24 09:51:48.261042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.169 ms 00:26:10.541 [2024-07-24 09:51:48.261056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.541 [2024-07-24 09:51:48.283090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.541 [2024-07-24 09:51:48.283314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:10.541 [2024-07-24 09:51:48.283441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.982 ms 00:26:10.541 [2024-07-24 09:51:48.283491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.541 [2024-07-24 09:51:48.283631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.541 [2024-07-24 09:51:48.283778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:10.541 [2024-07-24 09:51:48.283833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:26:10.541 [2024-07-24 09:51:48.283874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.541 [2024-07-24 09:51:48.295840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.541 [2024-07-24 09:51:48.296047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:10.541 [2024-07-24 09:51:48.296232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.866 ms 00:26:10.541 [2024-07-24 09:51:48.296288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.541 [2024-07-24 09:51:48.296384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.541 [2024-07-24 09:51:48.296514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:10.541 [2024-07-24 09:51:48.296564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:10.541 [2024-07-24 09:51:48.296606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.541 [2024-07-24 09:51:48.297321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.541 [2024-07-24 09:51:48.297492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:10.541 [2024-07-24 09:51:48.297603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:26:10.541 [2024-07-24 09:51:48.297701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.541 [2024-07-24 09:51:48.297901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.541 [2024-07-24 09:51:48.297980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:10.542 [2024-07-24 09:51:48.298037] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:26:10.542 [2024-07-24 09:51:48.298078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.542 [2024-07-24 09:51:48.305144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.542 [2024-07-24 09:51:48.305343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:10.542 [2024-07-24 09:51:48.305488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.018 ms 00:26:10.542 [2024-07-24 09:51:48.305538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.542 [2024-07-24 09:51:48.308613] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:10.542 [2024-07-24 09:51:48.308809] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:10.542 [2024-07-24 09:51:48.309033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.542 [2024-07-24 09:51:48.309088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:10.542 [2024-07-24 09:51:48.309131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.267 ms 00:26:10.542 [2024-07-24 09:51:48.309253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.542 [2024-07-24 09:51:48.322723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.542 [2024-07-24 09:51:48.322910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:10.542 [2024-07-24 09:51:48.323078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.394 ms 00:26:10.542 [2024-07-24 09:51:48.323238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.542 [2024-07-24 09:51:48.325748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.542 [2024-07-24 09:51:48.325922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:10.542 [2024-07-24 09:51:48.326026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.421 ms 00:26:10.542 [2024-07-24 09:51:48.326078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.542 [2024-07-24 09:51:48.327714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.542 [2024-07-24 09:51:48.327867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:10.542 [2024-07-24 09:51:48.327961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.560 ms 00:26:10.542 [2024-07-24 09:51:48.328014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.542 [2024-07-24 09:51:48.328513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.542 [2024-07-24 09:51:48.328564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:10.542 [2024-07-24 09:51:48.328583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:26:10.542 [2024-07-24 09:51:48.328602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.542 [2024-07-24 09:51:48.351342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.542 [2024-07-24 09:51:48.351429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:10.542 [2024-07-24 09:51:48.351452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.746 ms 00:26:10.542 [2024-07-24 09:51:48.351468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.800 [2024-07-24 09:51:48.358572] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:10.800 [2024-07-24 09:51:48.361916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.800 [2024-07-24 09:51:48.361953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:10.800 [2024-07-24 09:51:48.361968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.397 ms 00:26:10.800 [2024-07-24 09:51:48.361978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.800 [2024-07-24 09:51:48.362082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.800 [2024-07-24 09:51:48.362107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:10.800 [2024-07-24 09:51:48.362122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:10.800 [2024-07-24 09:51:48.362132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.800 [2024-07-24 09:51:48.362226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.800 [2024-07-24 09:51:48.362239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:10.800 [2024-07-24 09:51:48.362250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:26:10.800 [2024-07-24 09:51:48.362260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.800 [2024-07-24 09:51:48.362283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.800 [2024-07-24 09:51:48.362293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:10.800 [2024-07-24 09:51:48.362303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:10.800 [2024-07-24 09:51:48.362313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.801 [2024-07-24 09:51:48.362349] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:10.801 [2024-07-24 09:51:48.362360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.801 [2024-07-24 09:51:48.362373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:10.801 [2024-07-24 09:51:48.362386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:10.801 [2024-07-24 09:51:48.362396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.801 [2024-07-24 09:51:48.366384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.801 [2024-07-24 09:51:48.366423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:10.801 [2024-07-24 09:51:48.366437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.976 ms 00:26:10.801 [2024-07-24 09:51:48.366448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.801 [2024-07-24 09:51:48.366524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.801 [2024-07-24 09:51:48.366537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:10.801 [2024-07-24 09:51:48.366553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:10.801 [2024-07-24 09:51:48.366563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.801 
[2024-07-24 09:51:48.367676] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 124.979 ms, result 0 00:26:44.918  Copying: 27/1024 [MB] (27 MBps) Copying: 54/1024 [MB] (27 MBps) Copying: 82/1024 [MB] (27 MBps) Copying: 109/1024 [MB] (27 MBps) Copying: 139/1024 [MB] (29 MBps) Copying: 167/1024 [MB] (28 MBps) Copying: 195/1024 [MB] (27 MBps) Copying: 223/1024 [MB] (28 MBps) Copying: 252/1024 [MB] (28 MBps) Copying: 279/1024 [MB] (27 MBps) Copying: 311/1024 [MB] (31 MBps) Copying: 342/1024 [MB] (31 MBps) Copying: 374/1024 [MB] (31 MBps) Copying: 407/1024 [MB] (32 MBps) Copying: 442/1024 [MB] (34 MBps) Copying: 470/1024 [MB] (27 MBps) Copying: 497/1024 [MB] (27 MBps) Copying: 526/1024 [MB] (28 MBps) Copying: 557/1024 [MB] (31 MBps) Copying: 587/1024 [MB] (30 MBps) Copying: 617/1024 [MB] (29 MBps) Copying: 646/1024 [MB] (28 MBps) Copying: 673/1024 [MB] (27 MBps) Copying: 710/1024 [MB] (36 MBps) Copying: 744/1024 [MB] (34 MBps) Copying: 775/1024 [MB] (30 MBps) Copying: 806/1024 [MB] (30 MBps) Copying: 837/1024 [MB] (30 MBps) Copying: 865/1024 [MB] (28 MBps) Copying: 894/1024 [MB] (28 MBps) Copying: 922/1024 [MB] (27 MBps) Copying: 950/1024 [MB] (27 MBps) Copying: 980/1024 [MB] (29 MBps) Copying: 1015/1024 [MB] (35 MBps) Copying: 1024/1024 [MB] (average 29 MBps)[2024-07-24 09:52:22.580136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.918 [2024-07-24 09:52:22.580217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:44.918 [2024-07-24 09:52:22.580251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:44.918 [2024-07-24 09:52:22.580263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.918 [2024-07-24 09:52:22.580287] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:44.918 [2024-07-24 09:52:22.580974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.918 [2024-07-24 09:52:22.580988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:44.918 [2024-07-24 09:52:22.581000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:26:44.918 [2024-07-24 09:52:22.581010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.918 [2024-07-24 09:52:22.582683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.918 [2024-07-24 09:52:22.582730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:44.918 [2024-07-24 09:52:22.582745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.645 ms 00:26:44.918 [2024-07-24 09:52:22.582756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.918 [2024-07-24 09:52:22.582785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.918 [2024-07-24 09:52:22.582797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:26:44.918 [2024-07-24 09:52:22.582809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:44.918 [2024-07-24 09:52:22.582819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.918 [2024-07-24 09:52:22.582867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.918 [2024-07-24 09:52:22.582878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:26:44.918 [2024-07-24 
09:52:22.582889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:44.918 [2024-07-24 09:52:22.582903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.918 [2024-07-24 09:52:22.582919] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:44.918 [2024-07-24 09:52:22.582944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:44.918 [2024-07-24 09:52:22.582957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:44.918 [2024-07-24 09:52:22.582970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:44.918 [2024-07-24 09:52:22.582981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:44.918 [2024-07-24 09:52:22.582993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:44.918 [2024-07-24 09:52:22.583004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:44.918 [2024-07-24 09:52:22.583016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:44.918 [2024-07-24 09:52:22.583028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:44.918 [2024-07-24 09:52:22.583039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:44.918 [2024-07-24 09:52:22.583051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:44.918 [2024-07-24 09:52:22.583062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:44.918 [2024-07-24 09:52:22.583074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:44.918 [2024-07-24 09:52:22.583086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:44.918 [2024-07-24 09:52:22.583097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.583993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584084] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584385] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:44.919 [2024-07-24 09:52:22.584649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:44.920 [2024-07-24 09:52:22.584660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:44.920 [2024-07-24 
09:52:22.584671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:44.920 [2024-07-24 09:52:22.584683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:44.920 [2024-07-24 09:52:22.584694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:44.920 [2024-07-24 09:52:22.584713] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:44.920 [2024-07-24 09:52:22.584724] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5933179e-c109-4606-8639-197609436d36 00:26:44.920 [2024-07-24 09:52:22.584736] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:44.920 [2024-07-24 09:52:22.584746] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:26:44.920 [2024-07-24 09:52:22.584757] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:44.920 [2024-07-24 09:52:22.584769] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:44.920 [2024-07-24 09:52:22.584779] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:44.920 [2024-07-24 09:52:22.584790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:44.920 [2024-07-24 09:52:22.584806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:44.920 [2024-07-24 09:52:22.584815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:44.920 [2024-07-24 09:52:22.584825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:44.920 [2024-07-24 09:52:22.584837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.920 [2024-07-24 09:52:22.584860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:44.920 [2024-07-24 09:52:22.584872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.921 ms 00:26:44.920 [2024-07-24 09:52:22.584892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.920 [2024-07-24 09:52:22.586614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.920 [2024-07-24 09:52:22.586635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:44.920 [2024-07-24 09:52:22.586647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.700 ms 00:26:44.920 [2024-07-24 09:52:22.586658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.920 [2024-07-24 09:52:22.586767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.920 [2024-07-24 09:52:22.586778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:44.920 [2024-07-24 09:52:22.586790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:26:44.920 [2024-07-24 09:52:22.586800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.920 [2024-07-24 09:52:22.593073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.920 [2024-07-24 09:52:22.593219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:44.920 [2024-07-24 09:52:22.593354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.920 [2024-07-24 09:52:22.593403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.920 [2024-07-24 09:52:22.593526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:26:44.920 [2024-07-24 09:52:22.593574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:44.920 [2024-07-24 09:52:22.593685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.920 [2024-07-24 09:52:22.593725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.920 [2024-07-24 09:52:22.593811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.920 [2024-07-24 09:52:22.593901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:44.920 [2024-07-24 09:52:22.593942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.920 [2024-07-24 09:52:22.593955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.920 [2024-07-24 09:52:22.593983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.920 [2024-07-24 09:52:22.593995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:44.920 [2024-07-24 09:52:22.594005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.920 [2024-07-24 09:52:22.594016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.920 [2024-07-24 09:52:22.607433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.920 [2024-07-24 09:52:22.607671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:44.920 [2024-07-24 09:52:22.607750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.920 [2024-07-24 09:52:22.607798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.920 [2024-07-24 09:52:22.616269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.920 [2024-07-24 09:52:22.616448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:44.920 [2024-07-24 09:52:22.616523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.920 [2024-07-24 09:52:22.616559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.920 [2024-07-24 09:52:22.616642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.920 [2024-07-24 09:52:22.616676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:44.920 [2024-07-24 09:52:22.616723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.920 [2024-07-24 09:52:22.616791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.920 [2024-07-24 09:52:22.616849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.920 [2024-07-24 09:52:22.616899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:44.920 [2024-07-24 09:52:22.616942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.920 [2024-07-24 09:52:22.616972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.920 [2024-07-24 09:52:22.617057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.920 [2024-07-24 09:52:22.617134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:44.920 [2024-07-24 09:52:22.617228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.920 [2024-07-24 09:52:22.617277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
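The statistics dumped a little earlier in this shutdown report total writes: 32, user writes: 0 and WAF: inf. Since no user I/O has been issued yet, all 32 writes are internal metadata traffic, so a write-amplification factor taken as total writes over user writes diverges and is printed as infinity. A minimal sketch of that arithmetic (illustrative only, not SPDK source; the variable names are made up):

    /* waf_sketch.c - illustrative only, not SPDK source */
    #include <stdio.h>

    int main(void)
    {
        double total_writes = 32.0;  /* "total writes: 32" in the dump above */
        double user_writes  = 0.0;   /* "user writes: 0" */

        /* IEEE-754 division by zero yields +inf, which printf renders as "inf". */
        printf("WAF: %g\n", total_writes / user_writes);
        return 0;
    }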
00:26:44.920 [2024-07-24 09:52:22.617382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.920 [2024-07-24 09:52:22.617431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:44.920 [2024-07-24 09:52:22.617471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.920 [2024-07-24 09:52:22.617502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.920 [2024-07-24 09:52:22.617620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.920 [2024-07-24 09:52:22.617659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:44.920 [2024-07-24 09:52:22.617691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.920 [2024-07-24 09:52:22.617722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.920 [2024-07-24 09:52:22.617786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.920 [2024-07-24 09:52:22.617868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:44.920 [2024-07-24 09:52:22.618011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.920 [2024-07-24 09:52:22.618026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.920 [2024-07-24 09:52:22.618184] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 38.065 ms, result 0 00:26:45.853 00:26:45.853 00:26:45.853 09:52:23 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:26:45.853 [2024-07-24 09:52:23.427438] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:26:45.853 [2024-07-24 09:52:23.427723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95959 ] 00:26:45.853 [2024-07-24 09:52:23.593872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.853 [2024-07-24 09:52:23.640455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.111 [2024-07-24 09:52:23.742302] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:46.111 [2024-07-24 09:52:23.742565] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:46.111 [2024-07-24 09:52:23.900911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.111 [2024-07-24 09:52:23.900973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:46.111 [2024-07-24 09:52:23.900988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:46.111 [2024-07-24 09:52:23.900999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.111 [2024-07-24 09:52:23.901050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.111 [2024-07-24 09:52:23.901065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:46.111 [2024-07-24 09:52:23.901075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:26:46.111 [2024-07-24 09:52:23.901086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.111 [2024-07-24 09:52:23.901106] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:46.111 [2024-07-24 09:52:23.901365] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:46.111 [2024-07-24 09:52:23.901388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.111 [2024-07-24 09:52:23.901398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:46.111 [2024-07-24 09:52:23.901409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:26:46.111 [2024-07-24 09:52:23.901422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.111 [2024-07-24 09:52:23.901758] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:26:46.111 [2024-07-24 09:52:23.901784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.111 [2024-07-24 09:52:23.901798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:46.111 [2024-07-24 09:52:23.901809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:26:46.111 [2024-07-24 09:52:23.901819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.111 [2024-07-24 09:52:23.901877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.111 [2024-07-24 09:52:23.901887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:46.111 [2024-07-24 09:52:23.901897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:46.111 [2024-07-24 09:52:23.901907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.111 [2024-07-24 09:52:23.902282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:46.111 [2024-07-24 09:52:23.902301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:46.112 [2024-07-24 09:52:23.902312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:26:46.112 [2024-07-24 09:52:23.902321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.112 [2024-07-24 09:52:23.902406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.112 [2024-07-24 09:52:23.902419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:46.112 [2024-07-24 09:52:23.902436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:46.112 [2024-07-24 09:52:23.902449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.112 [2024-07-24 09:52:23.902486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.112 [2024-07-24 09:52:23.902497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:46.112 [2024-07-24 09:52:23.902511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:46.112 [2024-07-24 09:52:23.902520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.112 [2024-07-24 09:52:23.902542] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:46.112 [2024-07-24 09:52:23.904452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.112 [2024-07-24 09:52:23.904503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:46.112 [2024-07-24 09:52:23.904546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.917 ms 00:26:46.112 [2024-07-24 09:52:23.904578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.112 [2024-07-24 09:52:23.904630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.112 [2024-07-24 09:52:23.904763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:46.112 [2024-07-24 09:52:23.904798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:46.112 [2024-07-24 09:52:23.904826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.112 [2024-07-24 09:52:23.904869] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:46.112 [2024-07-24 09:52:23.905004] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:46.112 [2024-07-24 09:52:23.905081] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:46.112 [2024-07-24 09:52:23.905141] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:46.112 [2024-07-24 09:52:23.905409] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:46.112 [2024-07-24 09:52:23.905468] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:46.112 [2024-07-24 09:52:23.905562] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:46.112 [2024-07-24 09:52:23.905616] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:46.112 [2024-07-24 09:52:23.905663] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:46.112 [2024-07-24 09:52:23.905744] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:46.112 [2024-07-24 09:52:23.905774] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:46.112 [2024-07-24 09:52:23.905809] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:46.112 [2024-07-24 09:52:23.905838] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:46.112 [2024-07-24 09:52:23.905868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.112 [2024-07-24 09:52:23.905896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:46.112 [2024-07-24 09:52:23.905994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.002 ms 00:26:46.112 [2024-07-24 09:52:23.906060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.112 [2024-07-24 09:52:23.906172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.112 [2024-07-24 09:52:23.906221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:46.112 [2024-07-24 09:52:23.906251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:46.112 [2024-07-24 09:52:23.906343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.112 [2024-07-24 09:52:23.906483] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:46.112 [2024-07-24 09:52:23.906580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:46.112 [2024-07-24 09:52:23.906615] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:46.112 [2024-07-24 09:52:23.906692] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.112 [2024-07-24 09:52:23.906734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:46.112 [2024-07-24 09:52:23.906763] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:46.112 [2024-07-24 09:52:23.906832] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:46.112 [2024-07-24 09:52:23.906865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:46.112 [2024-07-24 09:52:23.906893] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:46.112 [2024-07-24 09:52:23.906952] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:46.112 [2024-07-24 09:52:23.906985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:46.112 [2024-07-24 09:52:23.907013] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:46.112 [2024-07-24 09:52:23.907088] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:46.112 [2024-07-24 09:52:23.907120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:46.112 [2024-07-24 09:52:23.907149] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:46.112 [2024-07-24 09:52:23.907178] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.112 [2024-07-24 09:52:23.907247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:46.112 [2024-07-24 09:52:23.907280] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:46.112 [2024-07-24 09:52:23.907310] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.112 [2024-07-24 09:52:23.907374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:46.112 [2024-07-24 09:52:23.907412] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:46.112 [2024-07-24 09:52:23.907441] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:46.112 [2024-07-24 09:52:23.907469] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:46.112 [2024-07-24 09:52:23.907524] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:46.112 [2024-07-24 09:52:23.907587] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:46.112 [2024-07-24 09:52:23.907619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:46.112 [2024-07-24 09:52:23.907677] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:46.112 [2024-07-24 09:52:23.907708] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:46.112 [2024-07-24 09:52:23.907737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:46.112 [2024-07-24 09:52:23.907765] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:46.112 [2024-07-24 09:52:23.907828] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:46.112 [2024-07-24 09:52:23.907860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:46.112 [2024-07-24 09:52:23.907889] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:46.112 [2024-07-24 09:52:23.907916] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:46.112 [2024-07-24 09:52:23.907970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:46.112 [2024-07-24 09:52:23.908002] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:46.112 [2024-07-24 09:52:23.908073] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:46.112 [2024-07-24 09:52:23.908106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:46.112 [2024-07-24 09:52:23.908135] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:46.112 [2024-07-24 09:52:23.908204] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.112 [2024-07-24 09:52:23.908240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:46.112 [2024-07-24 09:52:23.908269] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:46.112 [2024-07-24 09:52:23.908298] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.112 [2024-07-24 09:52:23.908363] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:46.112 [2024-07-24 09:52:23.908375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:46.112 [2024-07-24 09:52:23.908385] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:46.112 [2024-07-24 09:52:23.908395] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.112 [2024-07-24 09:52:23.908405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:46.112 [2024-07-24 09:52:23.908415] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:46.112 [2024-07-24 09:52:23.908424] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:46.112 
[2024-07-24 09:52:23.908433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:46.112 [2024-07-24 09:52:23.908442] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:46.112 [2024-07-24 09:52:23.908456] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:46.112 [2024-07-24 09:52:23.908467] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:46.112 [2024-07-24 09:52:23.908479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:46.112 [2024-07-24 09:52:23.908504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:46.112 [2024-07-24 09:52:23.908515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:46.112 [2024-07-24 09:52:23.908525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:46.113 [2024-07-24 09:52:23.908535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:46.113 [2024-07-24 09:52:23.908545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:46.113 [2024-07-24 09:52:23.908555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:46.113 [2024-07-24 09:52:23.908565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:46.113 [2024-07-24 09:52:23.908575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:46.113 [2024-07-24 09:52:23.908585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:46.113 [2024-07-24 09:52:23.908596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:46.113 [2024-07-24 09:52:23.908606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:46.113 [2024-07-24 09:52:23.908616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:46.113 [2024-07-24 09:52:23.908627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:46.113 [2024-07-24 09:52:23.908640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:46.113 [2024-07-24 09:52:23.908650] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:46.113 [2024-07-24 09:52:23.908661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:46.113 [2024-07-24 09:52:23.908679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:46.113 [2024-07-24 09:52:23.908690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:46.113 [2024-07-24 09:52:23.908700] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:46.113 [2024-07-24 09:52:23.908711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:46.113 [2024-07-24 09:52:23.908723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.113 [2024-07-24 09:52:23.908733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:46.113 [2024-07-24 09:52:23.908743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.295 ms 00:26:46.113 [2024-07-24 09:52:23.908756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.113 [2024-07-24 09:52:23.926289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.113 [2024-07-24 09:52:23.926341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:46.113 [2024-07-24 09:52:23.926368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.511 ms 00:26:46.113 [2024-07-24 09:52:23.926381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.113 [2024-07-24 09:52:23.926476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.113 [2024-07-24 09:52:23.926500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:46.113 [2024-07-24 09:52:23.926513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:46.113 [2024-07-24 09:52:23.926526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.371 [2024-07-24 09:52:23.937412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.371 [2024-07-24 09:52:23.937457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:46.371 [2024-07-24 09:52:23.937475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.830 ms 00:26:46.371 [2024-07-24 09:52:23.937485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.371 [2024-07-24 09:52:23.937527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.372 [2024-07-24 09:52:23.937541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:46.372 [2024-07-24 09:52:23.937552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:46.372 [2024-07-24 09:52:23.937567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.372 [2024-07-24 09:52:23.937670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.372 [2024-07-24 09:52:23.937684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:46.372 [2024-07-24 09:52:23.937695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:26:46.372 [2024-07-24 09:52:23.937705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.372 [2024-07-24 09:52:23.937818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.372 [2024-07-24 09:52:23.937837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:46.372 [2024-07-24 09:52:23.937848] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:26:46.372 [2024-07-24 09:52:23.937858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.372 [2024-07-24 09:52:23.943844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.372 [2024-07-24 09:52:23.943892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:46.372 [2024-07-24 09:52:23.943905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.970 ms 00:26:46.372 [2024-07-24 09:52:23.943916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.372 [2024-07-24 09:52:23.944040] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:46.372 [2024-07-24 09:52:23.944057] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:46.372 [2024-07-24 09:52:23.944070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.372 [2024-07-24 09:52:23.944086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:46.372 [2024-07-24 09:52:23.944104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:26:46.372 [2024-07-24 09:52:23.944122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.372 [2024-07-24 09:52:23.954749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.372 [2024-07-24 09:52:23.954780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:46.372 [2024-07-24 09:52:23.954793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.626 ms 00:26:46.372 [2024-07-24 09:52:23.954817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.372 [2024-07-24 09:52:23.954926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.372 [2024-07-24 09:52:23.954941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:46.372 [2024-07-24 09:52:23.954952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:26:46.372 [2024-07-24 09:52:23.954965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.372 [2024-07-24 09:52:23.955018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.372 [2024-07-24 09:52:23.955029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:46.372 [2024-07-24 09:52:23.955040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:26:46.372 [2024-07-24 09:52:23.955057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.372 [2024-07-24 09:52:23.955361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.372 [2024-07-24 09:52:23.955377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:46.372 [2024-07-24 09:52:23.955388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:26:46.372 [2024-07-24 09:52:23.955398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.372 [2024-07-24 09:52:23.955429] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:26:46.372 [2024-07-24 09:52:23.955441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.372 [2024-07-24 09:52:23.955455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:26:46.372 [2024-07-24 09:52:23.955466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:46.372 [2024-07-24 09:52:23.955475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.372 [2024-07-24 09:52:23.962839] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:46.372 [2024-07-24 09:52:23.963019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.372 [2024-07-24 09:52:23.963038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:46.372 [2024-07-24 09:52:23.963050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.537 ms 00:26:46.372 [2024-07-24 09:52:23.963060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.372 [2024-07-24 09:52:23.965121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.372 [2024-07-24 09:52:23.965158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:46.372 [2024-07-24 09:52:23.965170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.043 ms 00:26:46.372 [2024-07-24 09:52:23.965179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.372 [2024-07-24 09:52:23.965274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.372 [2024-07-24 09:52:23.965290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:46.372 [2024-07-24 09:52:23.965301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:46.372 [2024-07-24 09:52:23.965310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.372 [2024-07-24 09:52:23.965359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.372 [2024-07-24 09:52:23.965370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:46.372 [2024-07-24 09:52:23.965380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:46.372 [2024-07-24 09:52:23.965390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.372 [2024-07-24 09:52:23.965424] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:46.372 [2024-07-24 09:52:23.965436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.372 [2024-07-24 09:52:23.965446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:46.372 [2024-07-24 09:52:23.965468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:46.372 [2024-07-24 09:52:23.965480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.372 [2024-07-24 09:52:23.969571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.372 [2024-07-24 09:52:23.969607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:46.372 [2024-07-24 09:52:23.969621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.077 ms 00:26:46.372 [2024-07-24 09:52:23.969630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.372 [2024-07-24 09:52:23.969702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.372 [2024-07-24 09:52:23.969716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:46.372 [2024-07-24 09:52:23.969726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.037 ms 00:26:46.372 [2024-07-24 09:52:23.969740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.372 [2024-07-24 09:52:23.970761] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 69.570 ms, result 0 00:27:19.902  Copying: 30/1024 [MB] (30 MBps) Copying: 64/1024 [MB] (33 MBps) Copying: 101/1024 [MB] (37 MBps) Copying: 139/1024 [MB] (37 MBps) Copying: 171/1024 [MB] (32 MBps) Copying: 201/1024 [MB] (29 MBps) Copying: 231/1024 [MB] (29 MBps) Copying: 261/1024 [MB] (29 MBps) Copying: 291/1024 [MB] (30 MBps) Copying: 322/1024 [MB] (30 MBps) Copying: 353/1024 [MB] (31 MBps) Copying: 383/1024 [MB] (29 MBps) Copying: 413/1024 [MB] (30 MBps) Copying: 443/1024 [MB] (29 MBps) Copying: 472/1024 [MB] (29 MBps) Copying: 501/1024 [MB] (29 MBps) Copying: 530/1024 [MB] (28 MBps) Copying: 562/1024 [MB] (31 MBps) Copying: 592/1024 [MB] (29 MBps) Copying: 619/1024 [MB] (27 MBps) Copying: 648/1024 [MB] (28 MBps) Copying: 678/1024 [MB] (29 MBps) Copying: 707/1024 [MB] (29 MBps) Copying: 738/1024 [MB] (31 MBps) Copying: 769/1024 [MB] (31 MBps) Copying: 799/1024 [MB] (30 MBps) Copying: 830/1024 [MB] (30 MBps) Copying: 861/1024 [MB] (30 MBps) Copying: 892/1024 [MB] (30 MBps) Copying: 921/1024 [MB] (29 MBps) Copying: 950/1024 [MB] (29 MBps) Copying: 981/1024 [MB] (30 MBps) Copying: 1012/1024 [MB] (30 MBps) Copying: 1024/1024 [MB] (average 30 MBps)[2024-07-24 09:52:57.553281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.902 [2024-07-24 09:52:57.553368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:19.902 [2024-07-24 09:52:57.553403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:19.902 [2024-07-24 09:52:57.553422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.902 [2024-07-24 09:52:57.553460] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:19.902 [2024-07-24 09:52:57.554465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.902 [2024-07-24 09:52:57.554491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:19.902 [2024-07-24 09:52:57.554509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:27:19.902 [2024-07-24 09:52:57.554541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.902 [2024-07-24 09:52:57.554837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.902 [2024-07-24 09:52:57.554856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:19.902 [2024-07-24 09:52:57.554882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:27:19.902 [2024-07-24 09:52:57.554898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.902 [2024-07-24 09:52:57.555172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.902 [2024-07-24 09:52:57.555214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:19.902 [2024-07-24 09:52:57.555232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:19.902 [2024-07-24 09:52:57.555248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.902 [2024-07-24 09:52:57.555328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.902 [2024-07-24 09:52:57.555347] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:27:19.902 [2024-07-24 09:52:57.555365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:19.902 [2024-07-24 09:52:57.555387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.902 [2024-07-24 09:52:57.555425] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:19.902 [2024-07-24 09:52:57.555460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 
state: free 00:27:19.902 [2024-07-24 09:52:57.555861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.555989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 
0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:19.902 [2024-07-24 09:52:57.556855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.556873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557594] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:19.903 [2024-07-24 09:52:57.557678] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:19.903 [2024-07-24 09:52:57.557695] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5933179e-c109-4606-8639-197609436d36 00:27:19.903 [2024-07-24 09:52:57.557713] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:19.903 [2024-07-24 09:52:57.557729] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:27:19.903 [2024-07-24 09:52:57.557745] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:19.903 [2024-07-24 09:52:57.557761] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:19.903 [2024-07-24 09:52:57.557776] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:19.903 [2024-07-24 09:52:57.557794] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:19.903 [2024-07-24 09:52:57.557810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:19.903 [2024-07-24 09:52:57.557825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:19.903 [2024-07-24 09:52:57.557840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:19.903 [2024-07-24 09:52:57.557856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.903 [2024-07-24 09:52:57.557880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:19.903 [2024-07-24 09:52:57.557907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.436 ms 00:27:19.903 [2024-07-24 09:52:57.557925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.903 [2024-07-24 09:52:57.560505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.903 [2024-07-24 09:52:57.560550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:19.903 [2024-07-24 09:52:57.560571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.551 ms 00:27:19.903 [2024-07-24 09:52:57.560587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.903 [2024-07-24 09:52:57.560726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.903 [2024-07-24 09:52:57.560749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:19.903 [2024-07-24 09:52:57.560767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:27:19.903 [2024-07-24 09:52:57.560783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.903 [2024-07-24 09:52:57.567507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.903 [2024-07-24 09:52:57.567642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:19.903 [2024-07-24 09:52:57.567731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.903 [2024-07-24 09:52:57.567771] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:19.903 [2024-07-24 09:52:57.567860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.903 [2024-07-24 09:52:57.567908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:19.903 [2024-07-24 09:52:57.567942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.903 [2024-07-24 09:52:57.567974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.903 [2024-07-24 09:52:57.568136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.903 [2024-07-24 09:52:57.568181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:19.903 [2024-07-24 09:52:57.568249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.903 [2024-07-24 09:52:57.568282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.903 [2024-07-24 09:52:57.568421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.903 [2024-07-24 09:52:57.568457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:19.903 [2024-07-24 09:52:57.568491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.903 [2024-07-24 09:52:57.568599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.903 [2024-07-24 09:52:57.581913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.903 [2024-07-24 09:52:57.582117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:19.903 [2024-07-24 09:52:57.582202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.903 [2024-07-24 09:52:57.582239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.903 [2024-07-24 09:52:57.591727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.903 [2024-07-24 09:52:57.591885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:19.903 [2024-07-24 09:52:57.591968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.903 [2024-07-24 09:52:57.592002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.903 [2024-07-24 09:52:57.592081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.903 [2024-07-24 09:52:57.592114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:19.903 [2024-07-24 09:52:57.592145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.903 [2024-07-24 09:52:57.592173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.903 [2024-07-24 09:52:57.592272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.903 [2024-07-24 09:52:57.592413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:19.903 [2024-07-24 09:52:57.592482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.903 [2024-07-24 09:52:57.592511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.903 [2024-07-24 09:52:57.592593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.903 [2024-07-24 09:52:57.592629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:19.903 [2024-07-24 09:52:57.592659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:27:19.903 [2024-07-24 09:52:57.592687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.903 [2024-07-24 09:52:57.592748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.903 [2024-07-24 09:52:57.592825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:19.903 [2024-07-24 09:52:57.592914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.903 [2024-07-24 09:52:57.592944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.903 [2024-07-24 09:52:57.593002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.904 [2024-07-24 09:52:57.593033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:19.904 [2024-07-24 09:52:57.593063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.904 [2024-07-24 09:52:57.593104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.904 [2024-07-24 09:52:57.593255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.904 [2024-07-24 09:52:57.593300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:19.904 [2024-07-24 09:52:57.593335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.904 [2024-07-24 09:52:57.593364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.904 [2024-07-24 09:52:57.593564] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 40.326 ms, result 0 00:27:20.163 00:27:20.163 00:27:20.163 09:52:57 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:22.064 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:22.064 09:52:59 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:27:22.064 [2024-07-24 09:52:59.649059] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
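(Annotation: the restore.sh steps visible here drive a write/read-back/verify cycle against the ftl0 bdev. Below is a rough sketch of that sequence with the paths, offsets and config file copied from the log lines in this section; the fast-shutdown/restart that happens between the write and the read-back is elided, and this is an illustration of the pattern, not the actual test/ftl/restore.sh logic.)

    # Sketch of the restore_fast write / read-back / verify pattern (assumed from this log)
    SPDK=/home/vagrant/spdk_repo/spdk

    # 1. write the prepared test file into the FTL bdev at block offset 131072 (restore.sh@79)
    "$SPDK"/build/bin/spdk_dd --if="$SPDK"/test/ftl/testfile --ob=ftl0 \
        --json="$SPDK"/test/ftl/config/ftl.json --seek=131072

    # 2. FTL fast shutdown + restart happens here in the test (not shown in this sketch)

    # 3. read the same 262144-block range back out of ftl0 into the test file (restore.sh@80)
    "$SPDK"/build/bin/spdk_dd --ib=ftl0 --of="$SPDK"/test/ftl/testfile \
        --json="$SPDK"/test/ftl/config/ftl.json --skip=131072 --count=262144

    # 4. compare the read-back data against the stored checksum
    md5sum -c "$SPDK"/test/ftl/testfile.md5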
00:27:22.064 [2024-07-24 09:52:59.649207] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96322 ] 00:27:22.064 [2024-07-24 09:52:59.817069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.064 [2024-07-24 09:52:59.862930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.323 [2024-07-24 09:52:59.964324] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:22.323 [2024-07-24 09:52:59.964396] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:22.323 [2024-07-24 09:53:00.122872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.323 [2024-07-24 09:53:00.122921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:22.323 [2024-07-24 09:53:00.122936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:22.323 [2024-07-24 09:53:00.122953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.323 [2024-07-24 09:53:00.123004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.323 [2024-07-24 09:53:00.123018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:22.323 [2024-07-24 09:53:00.123029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:22.323 [2024-07-24 09:53:00.123039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.323 [2024-07-24 09:53:00.123074] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:22.323 [2024-07-24 09:53:00.123311] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:22.323 [2024-07-24 09:53:00.123331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.323 [2024-07-24 09:53:00.123341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:22.323 [2024-07-24 09:53:00.123359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:27:22.323 [2024-07-24 09:53:00.123372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.323 [2024-07-24 09:53:00.123692] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:27:22.323 [2024-07-24 09:53:00.123718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.323 [2024-07-24 09:53:00.123732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:22.323 [2024-07-24 09:53:00.123749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:22.323 [2024-07-24 09:53:00.123759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.323 [2024-07-24 09:53:00.123807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.323 [2024-07-24 09:53:00.123824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:22.323 [2024-07-24 09:53:00.123834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:22.323 [2024-07-24 09:53:00.123844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.323 [2024-07-24 09:53:00.124232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:22.323 [2024-07-24 09:53:00.124258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:22.323 [2024-07-24 09:53:00.124270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:27:22.323 [2024-07-24 09:53:00.124286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.323 [2024-07-24 09:53:00.124373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.323 [2024-07-24 09:53:00.124386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:22.324 [2024-07-24 09:53:00.124396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:27:22.324 [2024-07-24 09:53:00.124409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.324 [2024-07-24 09:53:00.124436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.324 [2024-07-24 09:53:00.124447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:22.324 [2024-07-24 09:53:00.124460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:22.324 [2024-07-24 09:53:00.124469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.324 [2024-07-24 09:53:00.124490] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:22.324 [2024-07-24 09:53:00.126409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.324 [2024-07-24 09:53:00.126459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:22.324 [2024-07-24 09:53:00.126502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.925 ms 00:27:22.324 [2024-07-24 09:53:00.126589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.324 [2024-07-24 09:53:00.126649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.324 [2024-07-24 09:53:00.126689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:22.324 [2024-07-24 09:53:00.126731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:22.324 [2024-07-24 09:53:00.126798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.324 [2024-07-24 09:53:00.126870] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:22.324 [2024-07-24 09:53:00.127019] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:22.324 [2024-07-24 09:53:00.127094] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:22.324 [2024-07-24 09:53:00.127146] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:22.324 [2024-07-24 09:53:00.127279] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:22.324 [2024-07-24 09:53:00.127338] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:22.324 [2024-07-24 09:53:00.127386] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:22.324 [2024-07-24 09:53:00.127444] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:22.324 [2024-07-24 09:53:00.127550] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:22.324 [2024-07-24 09:53:00.127600] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:22.324 [2024-07-24 09:53:00.127628] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:22.324 [2024-07-24 09:53:00.127662] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:22.324 [2024-07-24 09:53:00.127691] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:22.324 [2024-07-24 09:53:00.127757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.324 [2024-07-24 09:53:00.127844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:22.324 [2024-07-24 09:53:00.127929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:27:22.324 [2024-07-24 09:53:00.127969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.324 [2024-07-24 09:53:00.128065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.324 [2024-07-24 09:53:00.128097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:22.324 [2024-07-24 09:53:00.128126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:22.324 [2024-07-24 09:53:00.128155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.324 [2024-07-24 09:53:00.128336] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:22.324 [2024-07-24 09:53:00.128378] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:22.324 [2024-07-24 09:53:00.128408] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:22.324 [2024-07-24 09:53:00.128437] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.324 [2024-07-24 09:53:00.128538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:22.324 [2024-07-24 09:53:00.128572] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:22.324 [2024-07-24 09:53:00.128643] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:22.324 [2024-07-24 09:53:00.128676] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:22.324 [2024-07-24 09:53:00.128705] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:22.324 [2024-07-24 09:53:00.128733] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:22.324 [2024-07-24 09:53:00.128762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:22.324 [2024-07-24 09:53:00.128850] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:22.324 [2024-07-24 09:53:00.128939] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:22.324 [2024-07-24 09:53:00.128969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:22.324 [2024-07-24 09:53:00.128997] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:22.324 [2024-07-24 09:53:00.129027] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.324 [2024-07-24 09:53:00.129055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:22.324 [2024-07-24 09:53:00.129083] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:22.324 [2024-07-24 09:53:00.129111] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.324 [2024-07-24 09:53:00.129140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:22.324 [2024-07-24 09:53:00.129255] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:22.324 [2024-07-24 09:53:00.129305] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.324 [2024-07-24 09:53:00.129334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:22.324 [2024-07-24 09:53:00.129362] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:22.324 [2024-07-24 09:53:00.129390] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.324 [2024-07-24 09:53:00.129418] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:22.324 [2024-07-24 09:53:00.129447] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:22.324 [2024-07-24 09:53:00.129474] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.324 [2024-07-24 09:53:00.129550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:22.324 [2024-07-24 09:53:00.129583] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:22.324 [2024-07-24 09:53:00.129595] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.324 [2024-07-24 09:53:00.129605] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:22.324 [2024-07-24 09:53:00.129615] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:22.324 [2024-07-24 09:53:00.129625] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:22.324 [2024-07-24 09:53:00.129634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:22.324 [2024-07-24 09:53:00.129643] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:22.324 [2024-07-24 09:53:00.129652] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:22.324 [2024-07-24 09:53:00.129668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:22.324 [2024-07-24 09:53:00.129677] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:22.324 [2024-07-24 09:53:00.129686] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.324 [2024-07-24 09:53:00.129695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:22.324 [2024-07-24 09:53:00.129705] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:22.324 [2024-07-24 09:53:00.129714] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.324 [2024-07-24 09:53:00.129723] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:22.324 [2024-07-24 09:53:00.129733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:22.324 [2024-07-24 09:53:00.129743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:22.324 [2024-07-24 09:53:00.129752] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.324 [2024-07-24 09:53:00.129761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:22.324 [2024-07-24 09:53:00.129771] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:22.324 [2024-07-24 09:53:00.129780] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:22.324 
[2024-07-24 09:53:00.129789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:22.324 [2024-07-24 09:53:00.129798] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:22.324 [2024-07-24 09:53:00.129807] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:22.324 [2024-07-24 09:53:00.129821] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:22.324 [2024-07-24 09:53:00.129833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:22.324 [2024-07-24 09:53:00.129844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:22.324 [2024-07-24 09:53:00.129855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:22.324 [2024-07-24 09:53:00.129865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:22.324 [2024-07-24 09:53:00.129875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:22.324 [2024-07-24 09:53:00.129886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:22.324 [2024-07-24 09:53:00.129896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:22.325 [2024-07-24 09:53:00.129906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:22.325 [2024-07-24 09:53:00.129917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:22.325 [2024-07-24 09:53:00.129929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:22.325 [2024-07-24 09:53:00.129939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:22.325 [2024-07-24 09:53:00.129949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:22.325 [2024-07-24 09:53:00.129959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:22.325 [2024-07-24 09:53:00.129969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:22.325 [2024-07-24 09:53:00.129979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:22.325 [2024-07-24 09:53:00.129992] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:22.325 [2024-07-24 09:53:00.130003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:22.325 [2024-07-24 09:53:00.130013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:22.325 [2024-07-24 09:53:00.130024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:22.325 [2024-07-24 09:53:00.130033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:22.325 [2024-07-24 09:53:00.130044] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:22.325 [2024-07-24 09:53:00.130055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.325 [2024-07-24 09:53:00.130065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:22.325 [2024-07-24 09:53:00.130075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.773 ms 00:27:22.325 [2024-07-24 09:53:00.130088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.584 [2024-07-24 09:53:00.147169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.584 [2024-07-24 09:53:00.147219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:22.584 [2024-07-24 09:53:00.147234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.052 ms 00:27:22.584 [2024-07-24 09:53:00.147244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.584 [2024-07-24 09:53:00.147325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.584 [2024-07-24 09:53:00.147336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:22.584 [2024-07-24 09:53:00.147356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:22.584 [2024-07-24 09:53:00.147365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.584 [2024-07-24 09:53:00.158506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.584 [2024-07-24 09:53:00.158550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:22.584 [2024-07-24 09:53:00.158582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.094 ms 00:27:22.584 [2024-07-24 09:53:00.158596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.584 [2024-07-24 09:53:00.158635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.584 [2024-07-24 09:53:00.158649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:22.584 [2024-07-24 09:53:00.158662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:22.584 [2024-07-24 09:53:00.158679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.585 [2024-07-24 09:53:00.158802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.585 [2024-07-24 09:53:00.158819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:22.585 [2024-07-24 09:53:00.158832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:27:22.585 [2024-07-24 09:53:00.158845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.585 [2024-07-24 09:53:00.158980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.585 [2024-07-24 09:53:00.158997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:22.585 [2024-07-24 09:53:00.159010] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:27:22.585 [2024-07-24 09:53:00.159023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.585 [2024-07-24 09:53:00.164960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.585 [2024-07-24 09:53:00.164995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:22.585 [2024-07-24 09:53:00.165008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.918 ms 00:27:22.585 [2024-07-24 09:53:00.165018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.585 [2024-07-24 09:53:00.165129] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:22.585 [2024-07-24 09:53:00.165151] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:22.585 [2024-07-24 09:53:00.165163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.585 [2024-07-24 09:53:00.165177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:22.585 [2024-07-24 09:53:00.165206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:27:22.585 [2024-07-24 09:53:00.165216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.585 [2024-07-24 09:53:00.175607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.585 [2024-07-24 09:53:00.175637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:22.585 [2024-07-24 09:53:00.175650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.391 ms 00:27:22.585 [2024-07-24 09:53:00.175672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.585 [2024-07-24 09:53:00.175781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.585 [2024-07-24 09:53:00.175795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:22.585 [2024-07-24 09:53:00.175806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:27:22.585 [2024-07-24 09:53:00.175883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.585 [2024-07-24 09:53:00.175932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.585 [2024-07-24 09:53:00.175944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:22.585 [2024-07-24 09:53:00.175954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:27:22.585 [2024-07-24 09:53:00.175964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.585 [2024-07-24 09:53:00.176254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.585 [2024-07-24 09:53:00.176271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:22.585 [2024-07-24 09:53:00.176282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:27:22.585 [2024-07-24 09:53:00.176291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.585 [2024-07-24 09:53:00.176332] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:27:22.585 [2024-07-24 09:53:00.176344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.585 [2024-07-24 09:53:00.176357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:27:22.585 [2024-07-24 09:53:00.176368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:22.585 [2024-07-24 09:53:00.176377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.585 [2024-07-24 09:53:00.183740] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:22.585 [2024-07-24 09:53:00.183918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.585 [2024-07-24 09:53:00.183935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:22.585 [2024-07-24 09:53:00.183946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.532 ms 00:27:22.585 [2024-07-24 09:53:00.183956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.585 [2024-07-24 09:53:00.186012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.585 [2024-07-24 09:53:00.186042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:22.585 [2024-07-24 09:53:00.186053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.039 ms 00:27:22.585 [2024-07-24 09:53:00.186063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.585 [2024-07-24 09:53:00.186147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.585 [2024-07-24 09:53:00.186163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:22.585 [2024-07-24 09:53:00.186173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:27:22.585 [2024-07-24 09:53:00.186183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.585 [2024-07-24 09:53:00.186240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.585 [2024-07-24 09:53:00.186252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:22.585 [2024-07-24 09:53:00.186262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:22.585 [2024-07-24 09:53:00.186281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.585 [2024-07-24 09:53:00.186314] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:22.585 [2024-07-24 09:53:00.186327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.585 [2024-07-24 09:53:00.186336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:22.585 [2024-07-24 09:53:00.186349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:22.585 [2024-07-24 09:53:00.186363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.585 [2024-07-24 09:53:00.190248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.585 [2024-07-24 09:53:00.190283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:22.585 [2024-07-24 09:53:00.190296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.873 ms 00:27:22.585 [2024-07-24 09:53:00.190306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.585 [2024-07-24 09:53:00.190371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.585 [2024-07-24 09:53:00.190383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:22.585 [2024-07-24 09:53:00.190393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.028 ms 00:27:22.585 [2024-07-24 09:53:00.190408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.585 [2024-07-24 09:53:00.191471] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 68.295 ms, result 0 00:27:57.626  Copying: 27/1024 [MB] (27 MBps) Copying: 56/1024 [MB] (28 MBps) Copying: 84/1024 [MB] (28 MBps) Copying: 113/1024 [MB] (28 MBps) Copying: 144/1024 [MB] (30 MBps) Copying: 186/1024 [MB] (42 MBps) Copying: 221/1024 [MB] (35 MBps) Copying: 260/1024 [MB] (39 MBps) Copying: 296/1024 [MB] (35 MBps) Copying: 332/1024 [MB] (36 MBps) Copying: 362/1024 [MB] (29 MBps) Copying: 391/1024 [MB] (29 MBps) Copying: 421/1024 [MB] (30 MBps) Copying: 451/1024 [MB] (29 MBps) Copying: 479/1024 [MB] (28 MBps) Copying: 508/1024 [MB] (28 MBps) Copying: 537/1024 [MB] (28 MBps) Copying: 564/1024 [MB] (26 MBps) Copying: 591/1024 [MB] (26 MBps) Copying: 618/1024 [MB] (27 MBps) Copying: 644/1024 [MB] (26 MBps) Copying: 672/1024 [MB] (27 MBps) Copying: 700/1024 [MB] (28 MBps) Copying: 729/1024 [MB] (28 MBps) Copying: 756/1024 [MB] (27 MBps) Copying: 784/1024 [MB] (27 MBps) Copying: 812/1024 [MB] (27 MBps) Copying: 839/1024 [MB] (27 MBps) Copying: 867/1024 [MB] (27 MBps) Copying: 896/1024 [MB] (28 MBps) Copying: 925/1024 [MB] (28 MBps) Copying: 953/1024 [MB] (28 MBps) Copying: 983/1024 [MB] (29 MBps) Copying: 1012/1024 [MB] (29 MBps) Copying: 1023/1024 [MB] (11 MBps) Copying: 1024/1024 [MB] (average 29 MBps)[2024-07-24 09:53:35.434107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.626 [2024-07-24 09:53:35.434174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:57.626 [2024-07-24 09:53:35.434200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:57.626 [2024-07-24 09:53:35.434212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.626 [2024-07-24 09:53:35.436227] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:57.626 [2024-07-24 09:53:35.439597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.626 [2024-07-24 09:53:35.439726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:57.626 [2024-07-24 09:53:35.439801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.325 ms 00:27:57.626 [2024-07-24 09:53:35.439848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.888 [2024-07-24 09:53:35.448092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.888 [2024-07-24 09:53:35.448244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:57.888 [2024-07-24 09:53:35.448318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.117 ms 00:27:57.888 [2024-07-24 09:53:35.448353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.888 [2024-07-24 09:53:35.448409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.888 [2024-07-24 09:53:35.448456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:57.888 [2024-07-24 09:53:35.448491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:57.888 [2024-07-24 09:53:35.448525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.888 [2024-07-24 09:53:35.448591] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:27:57.888 [2024-07-24 09:53:35.448727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:27:57.888 [2024-07-24 09:53:35.448778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:57.888 [2024-07-24 09:53:35.448808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.888 [2024-07-24 09:53:35.448846] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:57.888 [2024-07-24 09:53:35.448889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130304 / 261120 wr_cnt: 1 state: open 00:27:57.888 [2024-07-24 09:53:35.448941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.448988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449397] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:57.888 [2024-07-24 09:53:35.449628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 
09:53:35.449672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 
00:27:57.889 [2024-07-24 09:53:35.449935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.449998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 
wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:57.889 [2024-07-24 09:53:35.450258] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:57.889 [2024-07-24 09:53:35.450268] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5933179e-c109-4606-8639-197609436d36 00:27:57.889 [2024-07-24 09:53:35.450280] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130304 00:27:57.889 [2024-07-24 09:53:35.450293] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130336 00:27:57.889 [2024-07-24 09:53:35.450303] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130304 00:27:57.889 [2024-07-24 09:53:35.450313] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:27:57.889 [2024-07-24 09:53:35.450323] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:57.889 [2024-07-24 09:53:35.450333] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:57.889 [2024-07-24 09:53:35.450342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:57.889 [2024-07-24 09:53:35.450351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:57.889 [2024-07-24 09:53:35.450361] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:57.889 [2024-07-24 09:53:35.450371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.889 [2024-07-24 09:53:35.450381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:57.889 [2024-07-24 09:53:35.450391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.528 ms 00:27:57.889 [2024-07-24 09:53:35.450401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.889 [2024-07-24 09:53:35.452208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.889 [2024-07-24 09:53:35.452234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:57.889 [2024-07-24 09:53:35.452244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.790 ms 00:27:57.889 [2024-07-24 09:53:35.452254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.889 [2024-07-24 09:53:35.452362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.889 [2024-07-24 09:53:35.452374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:57.889 [2024-07-24 09:53:35.452385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:27:57.890 [2024-07-24 09:53:35.452398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.890 [2024-07-24 09:53:35.458404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.890 [2024-07-24 09:53:35.458433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:57.890 [2024-07-24 09:53:35.458445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:27:57.890 [2024-07-24 09:53:35.458463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.890 [2024-07-24 09:53:35.458513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.890 [2024-07-24 09:53:35.458525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:57.890 [2024-07-24 09:53:35.458535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.890 [2024-07-24 09:53:35.458544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.890 [2024-07-24 09:53:35.458599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.890 [2024-07-24 09:53:35.458612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:57.890 [2024-07-24 09:53:35.458629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.890 [2024-07-24 09:53:35.458639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.890 [2024-07-24 09:53:35.458660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.890 [2024-07-24 09:53:35.458671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:57.890 [2024-07-24 09:53:35.458688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.890 [2024-07-24 09:53:35.458698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.890 [2024-07-24 09:53:35.470911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.890 [2024-07-24 09:53:35.470956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:57.890 [2024-07-24 09:53:35.470969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.890 [2024-07-24 09:53:35.470979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.890 [2024-07-24 09:53:35.480365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.890 [2024-07-24 09:53:35.480407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:57.890 [2024-07-24 09:53:35.480420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.890 [2024-07-24 09:53:35.480430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.890 [2024-07-24 09:53:35.480488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.890 [2024-07-24 09:53:35.480499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:57.890 [2024-07-24 09:53:35.480520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.890 [2024-07-24 09:53:35.480530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.890 [2024-07-24 09:53:35.480573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.890 [2024-07-24 09:53:35.480585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:57.890 [2024-07-24 09:53:35.480595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.890 [2024-07-24 09:53:35.480605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.890 [2024-07-24 09:53:35.480661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.890 [2024-07-24 09:53:35.480677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:57.890 
[2024-07-24 09:53:35.480688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.890 [2024-07-24 09:53:35.480697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.890 [2024-07-24 09:53:35.480723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.890 [2024-07-24 09:53:35.480742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:57.890 [2024-07-24 09:53:35.480752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.890 [2024-07-24 09:53:35.480769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.890 [2024-07-24 09:53:35.480804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.890 [2024-07-24 09:53:35.480819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:57.890 [2024-07-24 09:53:35.480829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.890 [2024-07-24 09:53:35.480849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.890 [2024-07-24 09:53:35.480904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.890 [2024-07-24 09:53:35.480916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:57.890 [2024-07-24 09:53:35.480926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.890 [2024-07-24 09:53:35.480936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.890 [2024-07-24 09:53:35.481078] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 48.840 ms, result 0 00:27:58.457 00:27:58.457 00:27:58.457 09:53:36 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:27:58.457 [2024-07-24 09:53:36.196976] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
00:27:58.457 [2024-07-24 09:53:36.197112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96693 ] 00:27:58.716 [2024-07-24 09:53:36.363490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.716 [2024-07-24 09:53:36.409511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.716 [2024-07-24 09:53:36.511427] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:58.716 [2024-07-24 09:53:36.511500] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:58.976 [2024-07-24 09:53:36.670479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.976 [2024-07-24 09:53:36.670548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:58.976 [2024-07-24 09:53:36.670565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:58.976 [2024-07-24 09:53:36.670575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.976 [2024-07-24 09:53:36.670627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.976 [2024-07-24 09:53:36.670642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:58.976 [2024-07-24 09:53:36.670653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:58.976 [2024-07-24 09:53:36.670663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.976 [2024-07-24 09:53:36.670696] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:58.976 [2024-07-24 09:53:36.670923] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:58.976 [2024-07-24 09:53:36.670941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.976 [2024-07-24 09:53:36.670960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:58.976 [2024-07-24 09:53:36.670971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:27:58.976 [2024-07-24 09:53:36.670980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.976 [2024-07-24 09:53:36.671303] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:27:58.976 [2024-07-24 09:53:36.671330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.976 [2024-07-24 09:53:36.671353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:58.976 [2024-07-24 09:53:36.671365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:58.976 [2024-07-24 09:53:36.671374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.976 [2024-07-24 09:53:36.671514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.976 [2024-07-24 09:53:36.671533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:58.976 [2024-07-24 09:53:36.671544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:58.976 [2024-07-24 09:53:36.671554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.976 [2024-07-24 09:53:36.671934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:58.976 [2024-07-24 09:53:36.671956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:58.976 [2024-07-24 09:53:36.671967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:27:58.976 [2024-07-24 09:53:36.671988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.976 [2024-07-24 09:53:36.672080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.976 [2024-07-24 09:53:36.672098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:58.976 [2024-07-24 09:53:36.672109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:27:58.976 [2024-07-24 09:53:36.672205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.976 [2024-07-24 09:53:36.672233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.976 [2024-07-24 09:53:36.672244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:58.976 [2024-07-24 09:53:36.672258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:58.976 [2024-07-24 09:53:36.672267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.976 [2024-07-24 09:53:36.672289] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:58.976 [2024-07-24 09:53:36.674112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.976 [2024-07-24 09:53:36.674141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:58.976 [2024-07-24 09:53:36.674153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.830 ms 00:27:58.976 [2024-07-24 09:53:36.674163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.976 [2024-07-24 09:53:36.674219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.976 [2024-07-24 09:53:36.674231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:58.976 [2024-07-24 09:53:36.674245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:58.976 [2024-07-24 09:53:36.674254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.976 [2024-07-24 09:53:36.674277] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:58.976 [2024-07-24 09:53:36.674299] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:58.976 [2024-07-24 09:53:36.674333] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:58.976 [2024-07-24 09:53:36.674350] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:58.976 [2024-07-24 09:53:36.674432] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:58.976 [2024-07-24 09:53:36.674449] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:58.976 [2024-07-24 09:53:36.674462] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:58.976 [2024-07-24 09:53:36.674474] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:58.976 [2024-07-24 09:53:36.674486] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:58.976 [2024-07-24 09:53:36.674497] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:58.976 [2024-07-24 09:53:36.674507] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:58.976 [2024-07-24 09:53:36.674517] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:58.976 [2024-07-24 09:53:36.674529] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:58.976 [2024-07-24 09:53:36.674539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.976 [2024-07-24 09:53:36.674549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:58.976 [2024-07-24 09:53:36.674559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:27:58.976 [2024-07-24 09:53:36.674572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.976 [2024-07-24 09:53:36.674651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.976 [2024-07-24 09:53:36.674663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:58.976 [2024-07-24 09:53:36.674673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:58.976 [2024-07-24 09:53:36.674682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.976 [2024-07-24 09:53:36.674759] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:58.976 [2024-07-24 09:53:36.674771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:58.977 [2024-07-24 09:53:36.674781] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:58.977 [2024-07-24 09:53:36.674800] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.977 [2024-07-24 09:53:36.674814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:58.977 [2024-07-24 09:53:36.674824] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:58.977 [2024-07-24 09:53:36.674833] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:58.977 [2024-07-24 09:53:36.674846] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:58.977 [2024-07-24 09:53:36.674856] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:58.977 [2024-07-24 09:53:36.674865] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:58.977 [2024-07-24 09:53:36.674875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:58.977 [2024-07-24 09:53:36.674884] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:58.977 [2024-07-24 09:53:36.674901] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:58.977 [2024-07-24 09:53:36.674911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:58.977 [2024-07-24 09:53:36.674920] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:58.977 [2024-07-24 09:53:36.674929] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.977 [2024-07-24 09:53:36.674938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:58.977 [2024-07-24 09:53:36.674947] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:58.977 [2024-07-24 09:53:36.674957] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.977 [2024-07-24 09:53:36.674966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:58.977 [2024-07-24 09:53:36.674976] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:58.977 [2024-07-24 09:53:36.674985] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:58.977 [2024-07-24 09:53:36.674995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:58.977 [2024-07-24 09:53:36.675007] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:58.977 [2024-07-24 09:53:36.675017] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:58.977 [2024-07-24 09:53:36.675028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:58.977 [2024-07-24 09:53:36.675037] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:58.977 [2024-07-24 09:53:36.675046] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:58.977 [2024-07-24 09:53:36.675055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:58.977 [2024-07-24 09:53:36.675065] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:58.977 [2024-07-24 09:53:36.675074] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:58.977 [2024-07-24 09:53:36.675083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:58.977 [2024-07-24 09:53:36.675092] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:58.977 [2024-07-24 09:53:36.675101] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:58.977 [2024-07-24 09:53:36.675110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:58.977 [2024-07-24 09:53:36.675119] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:58.977 [2024-07-24 09:53:36.675128] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:58.977 [2024-07-24 09:53:36.675137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:58.977 [2024-07-24 09:53:36.675146] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:58.977 [2024-07-24 09:53:36.675160] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.977 [2024-07-24 09:53:36.675170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:58.977 [2024-07-24 09:53:36.675179] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:58.977 [2024-07-24 09:53:36.675200] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.977 [2024-07-24 09:53:36.675210] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:58.977 [2024-07-24 09:53:36.675220] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:58.977 [2024-07-24 09:53:36.675230] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:58.977 [2024-07-24 09:53:36.675240] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.977 [2024-07-24 09:53:36.675250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:58.977 [2024-07-24 09:53:36.675259] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:58.977 [2024-07-24 09:53:36.675268] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:58.977 
[2024-07-24 09:53:36.675278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:58.977 [2024-07-24 09:53:36.675287] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:58.977 [2024-07-24 09:53:36.675297] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:58.977 [2024-07-24 09:53:36.675307] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:58.977 [2024-07-24 09:53:36.675318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:58.977 [2024-07-24 09:53:36.675332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:58.977 [2024-07-24 09:53:36.675344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:58.977 [2024-07-24 09:53:36.675355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:58.977 [2024-07-24 09:53:36.675365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:58.977 [2024-07-24 09:53:36.675376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:58.977 [2024-07-24 09:53:36.675386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:58.977 [2024-07-24 09:53:36.675397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:58.977 [2024-07-24 09:53:36.675407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:58.977 [2024-07-24 09:53:36.675417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:58.977 [2024-07-24 09:53:36.675428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:58.977 [2024-07-24 09:53:36.675438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:58.977 [2024-07-24 09:53:36.675448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:58.977 [2024-07-24 09:53:36.675458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:58.977 [2024-07-24 09:53:36.675469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:58.977 [2024-07-24 09:53:36.675479] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:58.977 [2024-07-24 09:53:36.675489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:58.977 [2024-07-24 09:53:36.675504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:58.977 [2024-07-24 09:53:36.675515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:58.977 [2024-07-24 09:53:36.675525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:58.977 [2024-07-24 09:53:36.675535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:58.977 [2024-07-24 09:53:36.675546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.977 [2024-07-24 09:53:36.675556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:58.977 [2024-07-24 09:53:36.675566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:27:58.977 [2024-07-24 09:53:36.675579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.978 [2024-07-24 09:53:36.695287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.978 [2024-07-24 09:53:36.695477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:58.978 [2024-07-24 09:53:36.695629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.682 ms 00:27:58.978 [2024-07-24 09:53:36.695686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.978 [2024-07-24 09:53:36.695820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.978 [2024-07-24 09:53:36.695910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:58.978 [2024-07-24 09:53:36.695989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:58.978 [2024-07-24 09:53:36.696045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.978 [2024-07-24 09:53:36.707488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.978 [2024-07-24 09:53:36.707662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:58.978 [2024-07-24 09:53:36.707816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.350 ms 00:27:58.978 [2024-07-24 09:53:36.707865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.978 [2024-07-24 09:53:36.707963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.978 [2024-07-24 09:53:36.708104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:58.978 [2024-07-24 09:53:36.708179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:58.978 [2024-07-24 09:53:36.708259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.978 [2024-07-24 09:53:36.708441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.978 [2024-07-24 09:53:36.708639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:58.978 [2024-07-24 09:53:36.708698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:58.978 [2024-07-24 09:53:36.708740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.978 [2024-07-24 09:53:36.708994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.978 [2024-07-24 09:53:36.709055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:58.978 [2024-07-24 09:53:36.709277] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:27:58.978 [2024-07-24 09:53:36.709331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.978 [2024-07-24 09:53:36.715534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.978 [2024-07-24 09:53:36.715666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:58.978 [2024-07-24 09:53:36.715748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.143 ms 00:27:58.978 [2024-07-24 09:53:36.715792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.978 [2024-07-24 09:53:36.715929] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:58.978 [2024-07-24 09:53:36.715996] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:58.978 [2024-07-24 09:53:36.716114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.978 [2024-07-24 09:53:36.716162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:58.978 [2024-07-24 09:53:36.716295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:27:58.978 [2024-07-24 09:53:36.716362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.978 [2024-07-24 09:53:36.726850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.978 [2024-07-24 09:53:36.726999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:58.978 [2024-07-24 09:53:36.727080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.462 ms 00:27:58.978 [2024-07-24 09:53:36.727117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.978 [2024-07-24 09:53:36.727268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.978 [2024-07-24 09:53:36.727312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:58.978 [2024-07-24 09:53:36.727401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:27:58.978 [2024-07-24 09:53:36.727447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.978 [2024-07-24 09:53:36.727524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.978 [2024-07-24 09:53:36.727563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:58.978 [2024-07-24 09:53:36.727668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:58.978 [2024-07-24 09:53:36.727736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.978 [2024-07-24 09:53:36.728027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.978 [2024-07-24 09:53:36.728078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:58.978 [2024-07-24 09:53:36.728168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:27:58.978 [2024-07-24 09:53:36.728223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.978 [2024-07-24 09:53:36.728275] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:27:58.978 [2024-07-24 09:53:36.728436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.978 [2024-07-24 09:53:36.728468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:27:58.978 [2024-07-24 09:53:36.728498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:27:58.978 [2024-07-24 09:53:36.728532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.978 [2024-07-24 09:53:36.735810] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:58.978 [2024-07-24 09:53:36.736096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.978 [2024-07-24 09:53:36.736235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:58.978 [2024-07-24 09:53:36.736264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.535 ms 00:27:58.978 [2024-07-24 09:53:36.736275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.978 [2024-07-24 09:53:36.738418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.978 [2024-07-24 09:53:36.738443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:58.978 [2024-07-24 09:53:36.738454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.117 ms 00:27:58.978 [2024-07-24 09:53:36.738465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.978 [2024-07-24 09:53:36.738521] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:27:58.978 [2024-07-24 09:53:36.739096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.978 [2024-07-24 09:53:36.739118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:58.978 [2024-07-24 09:53:36.739131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:27:58.978 [2024-07-24 09:53:36.739154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.978 [2024-07-24 09:53:36.739206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.979 [2024-07-24 09:53:36.739218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:58.979 [2024-07-24 09:53:36.739229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:58.979 [2024-07-24 09:53:36.739239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.979 [2024-07-24 09:53:36.739271] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:58.979 [2024-07-24 09:53:36.739283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.979 [2024-07-24 09:53:36.739296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:58.979 [2024-07-24 09:53:36.739306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:58.979 [2024-07-24 09:53:36.739317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.979 [2024-07-24 09:53:36.743407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.979 [2024-07-24 09:53:36.743449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:58.979 [2024-07-24 09:53:36.743463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.071 ms 00:27:58.979 [2024-07-24 09:53:36.743473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.979 [2024-07-24 09:53:36.743535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.979 [2024-07-24 09:53:36.743547] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:58.979 [2024-07-24 09:53:36.743562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:58.979 [2024-07-24 09:53:36.743572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.979 [2024-07-24 09:53:36.749855] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 77.580 ms, result 0 00:28:34.703  Copying: 31/1024 [MB] (31 MBps) Copying: 58/1024 [MB] (27 MBps) Copying: 86/1024 [MB] (28 MBps) Copying: 113/1024 [MB] (27 MBps) Copying: 143/1024 [MB] (29 MBps) Copying: 173/1024 [MB] (29 MBps) Copying: 202/1024 [MB] (29 MBps) Copying: 232/1024 [MB] (29 MBps) Copying: 263/1024 [MB] (30 MBps) Copying: 292/1024 [MB] (29 MBps) Copying: 320/1024 [MB] (28 MBps) Copying: 350/1024 [MB] (29 MBps) Copying: 379/1024 [MB] (29 MBps) Copying: 408/1024 [MB] (28 MBps) Copying: 437/1024 [MB] (28 MBps) Copying: 465/1024 [MB] (28 MBps) Copying: 493/1024 [MB] (28 MBps) Copying: 523/1024 [MB] (29 MBps) Copying: 552/1024 [MB] (28 MBps) Copying: 581/1024 [MB] (28 MBps) Copying: 609/1024 [MB] (28 MBps) Copying: 637/1024 [MB] (28 MBps) Copying: 666/1024 [MB] (28 MBps) Copying: 695/1024 [MB] (29 MBps) Copying: 724/1024 [MB] (29 MBps) Copying: 753/1024 [MB] (28 MBps) Copying: 783/1024 [MB] (29 MBps) Copying: 814/1024 [MB] (30 MBps) Copying: 842/1024 [MB] (28 MBps) Copying: 870/1024 [MB] (28 MBps) Copying: 898/1024 [MB] (27 MBps) Copying: 928/1024 [MB] (30 MBps) Copying: 957/1024 [MB] (28 MBps) Copying: 986/1024 [MB] (29 MBps) Copying: 1019/1024 [MB] (32 MBps) Copying: 1024/1024 [MB] (average 29 MBps)[2024-07-24 09:54:12.484409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.703 [2024-07-24 09:54:12.484482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:34.703 [2024-07-24 09:54:12.484501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:34.703 [2024-07-24 09:54:12.484525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.703 [2024-07-24 09:54:12.484551] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:34.703 [2024-07-24 09:54:12.485473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.703 [2024-07-24 09:54:12.485499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:34.703 [2024-07-24 09:54:12.485512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.903 ms 00:28:34.703 [2024-07-24 09:54:12.485529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.703 [2024-07-24 09:54:12.485730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.703 [2024-07-24 09:54:12.485742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:34.703 [2024-07-24 09:54:12.485753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:28:34.703 [2024-07-24 09:54:12.485763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.703 [2024-07-24 09:54:12.485793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.703 [2024-07-24 09:54:12.485805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:34.703 [2024-07-24 09:54:12.485821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:34.703 [2024-07-24 
09:54:12.485832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.703 [2024-07-24 09:54:12.485898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.703 [2024-07-24 09:54:12.485909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:34.703 [2024-07-24 09:54:12.485920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:28:34.703 [2024-07-24 09:54:12.485930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.703 [2024-07-24 09:54:12.485946] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:34.703 [2024-07-24 09:54:12.485961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 134144 / 261120 wr_cnt: 1 state: open 00:28:34.703 [2024-07-24 09:54:12.485975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:34.703 [2024-07-24 09:54:12.485986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:34.703 [2024-07-24 09:54:12.485998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:34.703 [2024-07-24 09:54:12.486009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:34.703 [2024-07-24 09:54:12.486021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:34.703 [2024-07-24 09:54:12.486032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:34.703 [2024-07-24 09:54:12.486044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:34.703 [2024-07-24 09:54:12.486055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:34.703 [2024-07-24 09:54:12.486077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:34.703 [2024-07-24 09:54:12.486089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:34.703 [2024-07-24 09:54:12.486100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:34.703 [2024-07-24 09:54:12.486111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:34.703 [2024-07-24 09:54:12.486123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:34.703 [2024-07-24 09:54:12.486134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:34.703 [2024-07-24 09:54:12.486145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:34.703 [2024-07-24 09:54:12.486156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:34.703 [2024-07-24 09:54:12.486167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486199] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 
[2024-07-24 09:54:12.486499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:34.704 [2024-07-24 09:54:12.486629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.486639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.486649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.486660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.486670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.486680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.486690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.486700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.486711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.486721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.486731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.486741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.486751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.486761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:28:34.705 [2024-07-24 09:54:12.486772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.486782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.486792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.486802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.486812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.486823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.486833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.487038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.487049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.487059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.487070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.487080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.487091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.487101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.487112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.487122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.487133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.487143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.487153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.487164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.487174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.487185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.487195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.487206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.487216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.487235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:34.705 [2024-07-24 09:54:12.487251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:34.706 [2024-07-24 09:54:12.487261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:34.706 [2024-07-24 09:54:12.487272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:34.706 [2024-07-24 09:54:12.487282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:34.706 [2024-07-24 09:54:12.487300] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:34.706 [2024-07-24 09:54:12.487310] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5933179e-c109-4606-8639-197609436d36 00:28:34.706 [2024-07-24 09:54:12.487325] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 134144 00:28:34.706 [2024-07-24 09:54:12.487334] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3872 00:28:34.706 [2024-07-24 09:54:12.487344] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3840 00:28:34.706 [2024-07-24 09:54:12.487354] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083 00:28:34.706 [2024-07-24 09:54:12.487364] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:34.706 [2024-07-24 09:54:12.487376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:34.706 [2024-07-24 09:54:12.487386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:34.706 [2024-07-24 09:54:12.487400] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:34.706 [2024-07-24 09:54:12.487410] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:34.706 [2024-07-24 09:54:12.487420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.706 [2024-07-24 09:54:12.487430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:34.706 [2024-07-24 09:54:12.487440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.477 ms 00:28:34.706 [2024-07-24 09:54:12.487458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.706 [2024-07-24 09:54:12.489447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.706 [2024-07-24 09:54:12.489477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:34.706 [2024-07-24 09:54:12.489490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.972 ms 00:28:34.706 [2024-07-24 09:54:12.489500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.706 [2024-07-24 09:54:12.489777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.706 [2024-07-24 09:54:12.489789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:34.706 [2024-07-24 09:54:12.489808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:28:34.706 [2024-07-24 09:54:12.489822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.706 [2024-07-24 09:54:12.496761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.706 [2024-07-24 09:54:12.497120] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:34.706 [2024-07-24 09:54:12.497222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.706 [2024-07-24 09:54:12.497261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.706 [2024-07-24 09:54:12.497341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.706 [2024-07-24 09:54:12.497373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:34.706 [2024-07-24 09:54:12.497403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.706 [2024-07-24 09:54:12.497440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.706 [2024-07-24 09:54:12.497591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.706 [2024-07-24 09:54:12.497632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:34.706 [2024-07-24 09:54:12.497663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.706 [2024-07-24 09:54:12.497703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.706 [2024-07-24 09:54:12.497741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.706 [2024-07-24 09:54:12.497859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:34.706 [2024-07-24 09:54:12.497872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.706 [2024-07-24 09:54:12.497882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.706 [2024-07-24 09:54:12.511499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.706 [2024-07-24 09:54:12.511740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:34.706 [2024-07-24 09:54:12.511817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.706 [2024-07-24 09:54:12.511852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.966 [2024-07-24 09:54:12.521439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.966 [2024-07-24 09:54:12.521504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:34.966 [2024-07-24 09:54:12.521519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.966 [2024-07-24 09:54:12.521535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.966 [2024-07-24 09:54:12.521596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.966 [2024-07-24 09:54:12.521608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:34.966 [2024-07-24 09:54:12.521618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.966 [2024-07-24 09:54:12.521628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.966 [2024-07-24 09:54:12.521654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.966 [2024-07-24 09:54:12.521664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:34.966 [2024-07-24 09:54:12.521674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.966 [2024-07-24 09:54:12.521684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.966 [2024-07-24 09:54:12.521745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:28:34.966 [2024-07-24 09:54:12.521758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:34.966 [2024-07-24 09:54:12.521776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.966 [2024-07-24 09:54:12.521785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.966 [2024-07-24 09:54:12.521812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.966 [2024-07-24 09:54:12.521824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:34.966 [2024-07-24 09:54:12.521835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.966 [2024-07-24 09:54:12.521845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.966 [2024-07-24 09:54:12.521891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.966 [2024-07-24 09:54:12.521908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:34.966 [2024-07-24 09:54:12.521918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.966 [2024-07-24 09:54:12.521937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.966 [2024-07-24 09:54:12.521985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.966 [2024-07-24 09:54:12.521997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:34.966 [2024-07-24 09:54:12.522006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.966 [2024-07-24 09:54:12.522016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.966 [2024-07-24 09:54:12.522143] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 37.768 ms, result 0 00:28:34.966 00:28:34.966 00:28:34.966 09:54:12 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:36.865 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:36.865 09:54:14 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:36.865 09:54:14 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:28:36.865 09:54:14 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:36.865 09:54:14 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:36.865 09:54:14 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:36.865 Process with pid 95397 is not found 00:28:36.865 Remove shared memory files 00:28:36.865 09:54:14 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 95397 00:28:36.865 09:54:14 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 95397 ']' 00:28:36.865 09:54:14 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 95397 00:28:36.865 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (95397) - No such process 00:28:36.865 09:54:14 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 95397 is not found' 00:28:36.865 09:54:14 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:28:36.865 09:54:14 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:36.865 09:54:14 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 
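Note on the output above: the statistics dump reports WAF 1.0083, which is just total writes divided by user writes (3872 / 3840 ≈ 1.0083), and the fast-shutdown restore is then validated by re-checking testfile.md5 before the per-device hugepage files are removed. A minimal sketch of that verify-and-clean pattern follows; TESTDIR and FTL_UUID are illustrative placeholders, not variables taken from the harness.

    # Re-verify the restored file against the checksum recorded before shutdown.
    md5sum -c "$TESTDIR/testfile.md5"

    # Reproduce the WAF figure printed in the stats dump: total writes / user writes.
    echo "scale=4; 3872 / 3840" | bc    # -> 1.0083

    # Drop the FTL metadata files that back the device in hugepage shared memory.
    rm -f /dev/hugepages/ftl_"${FTL_UUID}"_*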
00:28:36.866 09:54:14 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_5933179e-c109-4606-8639-197609436d36_band_md /dev/hugepages/ftl_5933179e-c109-4606-8639-197609436d36_l2p_l1 /dev/hugepages/ftl_5933179e-c109-4606-8639-197609436d36_l2p_l2 /dev/hugepages/ftl_5933179e-c109-4606-8639-197609436d36_l2p_l2_ctx /dev/hugepages/ftl_5933179e-c109-4606-8639-197609436d36_nvc_md /dev/hugepages/ftl_5933179e-c109-4606-8639-197609436d36_p2l_pool /dev/hugepages/ftl_5933179e-c109-4606-8639-197609436d36_sb /dev/hugepages/ftl_5933179e-c109-4606-8639-197609436d36_sb_shm /dev/hugepages/ftl_5933179e-c109-4606-8639-197609436d36_trim_bitmap /dev/hugepages/ftl_5933179e-c109-4606-8639-197609436d36_trim_log /dev/hugepages/ftl_5933179e-c109-4606-8639-197609436d36_trim_md /dev/hugepages/ftl_5933179e-c109-4606-8639-197609436d36_vmap 00:28:36.866 09:54:14 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:28:36.866 09:54:14 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:36.866 09:54:14 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:28:36.866 ************************************ 00:28:36.866 END TEST ftl_restore_fast 00:28:36.866 ************************************ 00:28:36.866 00:28:36.866 real 2m44.540s 00:28:36.866 user 2m33.372s 00:28:36.866 sys 0m12.808s 00:28:36.866 09:54:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:36.866 09:54:14 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:37.123 09:54:14 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:37.123 09:54:14 ftl -- ftl/ftl.sh@14 -- # killprocess 88736 00:28:37.123 09:54:14 ftl -- common/autotest_common.sh@950 -- # '[' -z 88736 ']' 00:28:37.123 09:54:14 ftl -- common/autotest_common.sh@954 -- # kill -0 88736 00:28:37.123 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (88736) - No such process 00:28:37.123 Process with pid 88736 is not found 00:28:37.123 09:54:14 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 88736 is not found' 00:28:37.123 09:54:14 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:28:37.123 09:54:14 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=97103 00:28:37.123 09:54:14 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:37.124 09:54:14 ftl -- ftl/ftl.sh@20 -- # waitforlisten 97103 00:28:37.124 09:54:14 ftl -- common/autotest_common.sh@831 -- # '[' -z 97103 ']' 00:28:37.124 09:54:14 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.124 09:54:14 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:37.124 09:54:14 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.124 09:54:14 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:37.124 09:54:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:37.124 [2024-07-24 09:54:14.834519] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 23.11.0 initialization... 
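The exit hook above kills any leftover test application, launches a fresh spdk_tgt, and blocks until its RPC socket answers; the entries that follow then issue the controller-attach and lvstore-cleanup RPCs against it. A rough outline of that kill-and-relaunch sequence, under the assumption that polling rpc_get_methods is an acceptable stand-in for the waitforlisten helper:

    # Stop the previous app if it is still running (the log shows it was already gone).
    kill -0 "$old_pid" 2>/dev/null && kill "$old_pid"

    # Launch a fresh SPDK target and remember its pid.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!

    # Poll the RPC socket until the target is ready to accept commands.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done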
00:28:37.124 [2024-07-24 09:54:14.834649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97103 ] 00:28:37.382 [2024-07-24 09:54:15.000545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.382 [2024-07-24 09:54:15.045243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.950 09:54:15 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:37.950 09:54:15 ftl -- common/autotest_common.sh@864 -- # return 0 00:28:37.950 09:54:15 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:38.207 nvme0n1 00:28:38.207 09:54:15 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:28:38.207 09:54:15 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:38.208 09:54:15 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:38.466 09:54:16 ftl -- ftl/common.sh@28 -- # stores=7aefb964-ed09-4699-beb0-804ac75da744 00:28:38.466 09:54:16 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:28:38.466 09:54:16 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7aefb964-ed09-4699-beb0-804ac75da744 00:28:38.467 09:54:16 ftl -- ftl/ftl.sh@23 -- # killprocess 97103 00:28:38.467 09:54:16 ftl -- common/autotest_common.sh@950 -- # '[' -z 97103 ']' 00:28:38.467 09:54:16 ftl -- common/autotest_common.sh@954 -- # kill -0 97103 00:28:38.467 09:54:16 ftl -- common/autotest_common.sh@955 -- # uname 00:28:38.467 09:54:16 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:38.467 09:54:16 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97103 00:28:38.725 killing process with pid 97103 00:28:38.725 09:54:16 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:38.725 09:54:16 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:38.725 09:54:16 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97103' 00:28:38.725 09:54:16 ftl -- common/autotest_common.sh@969 -- # kill 97103 00:28:38.725 09:54:16 ftl -- common/autotest_common.sh@974 -- # wait 97103 00:28:38.983 09:54:16 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:39.241 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:39.499 Waiting for block devices as requested 00:28:39.499 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:39.499 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:39.757 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:28:39.757 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:28:45.025 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:28:45.025 09:54:22 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:28:45.025 09:54:22 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:45.025 Remove shared memory files 00:28:45.025 09:54:22 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:28:45.025 09:54:22 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:28:45.025 09:54:22 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:28:45.025 09:54:22 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:45.025 09:54:22 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:28:45.025 
************************************ 00:28:45.025 END TEST ftl 00:28:45.025 ************************************ 00:28:45.025 00:28:45.025 real 12m9.471s 00:28:45.025 user 14m1.798s 00:28:45.025 sys 1m34.184s 00:28:45.025 09:54:22 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:45.026 09:54:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:45.026 09:54:22 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:45.026 09:54:22 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:28:45.026 09:54:22 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:45.026 09:54:22 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:28:45.026 09:54:22 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:45.026 09:54:22 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:45.026 09:54:22 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:45.026 09:54:22 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:28:45.026 09:54:22 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:28:45.026 09:54:22 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:28:45.026 09:54:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:45.026 09:54:22 -- common/autotest_common.sh@10 -- # set +x 00:28:45.026 09:54:22 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:28:45.026 09:54:22 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:28:45.026 09:54:22 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:28:45.026 09:54:22 -- common/autotest_common.sh@10 -- # set +x 00:28:46.925 INFO: APP EXITING 00:28:46.925 INFO: killing all VMs 00:28:46.925 INFO: killing vhost app 00:28:46.925 INFO: EXIT DONE 00:28:47.184 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:47.751 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:47.751 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:47.751 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:28:47.751 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:28:48.318 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:48.576 Cleaning 00:28:48.576 Removing: /var/run/dpdk/spdk0/config 00:28:48.835 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:48.835 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:48.835 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:48.835 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:48.835 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:48.835 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:48.835 Removing: /var/run/dpdk/spdk0 00:28:48.835 Removing: /var/run/dpdk/spdk_pid74245 00:28:48.835 Removing: /var/run/dpdk/spdk_pid74406 00:28:48.835 Removing: /var/run/dpdk/spdk_pid74599 00:28:48.835 Removing: /var/run/dpdk/spdk_pid74687 00:28:48.835 Removing: /var/run/dpdk/spdk_pid74710 00:28:48.835 Removing: /var/run/dpdk/spdk_pid74829 00:28:48.835 Removing: /var/run/dpdk/spdk_pid74847 00:28:48.835 Removing: /var/run/dpdk/spdk_pid75000 00:28:48.835 Removing: /var/run/dpdk/spdk_pid75067 00:28:48.835 Removing: /var/run/dpdk/spdk_pid75143 00:28:48.835 Removing: /var/run/dpdk/spdk_pid75230 00:28:48.835 Removing: /var/run/dpdk/spdk_pid75303 00:28:48.835 Removing: /var/run/dpdk/spdk_pid75342 00:28:48.835 Removing: /var/run/dpdk/spdk_pid75379 00:28:48.835 Removing: /var/run/dpdk/spdk_pid75441 00:28:48.835 Removing: /var/run/dpdk/spdk_pid75555 00:28:48.835 Removing: /var/run/dpdk/spdk_pid75973 00:28:48.835 Removing: /var/run/dpdk/spdk_pid76021 
00:28:48.835 Removing: /var/run/dpdk/spdk_pid76073 00:28:48.835 Removing: /var/run/dpdk/spdk_pid76089 00:28:48.835 Removing: /var/run/dpdk/spdk_pid76158 00:28:48.835 Removing: /var/run/dpdk/spdk_pid76173 00:28:48.835 Removing: /var/run/dpdk/spdk_pid76237 00:28:48.835 Removing: /var/run/dpdk/spdk_pid76253 00:28:48.835 Removing: /var/run/dpdk/spdk_pid76301 00:28:48.835 Removing: /var/run/dpdk/spdk_pid76319 00:28:48.835 Removing: /var/run/dpdk/spdk_pid76361 00:28:48.835 Removing: /var/run/dpdk/spdk_pid76379 00:28:48.835 Removing: /var/run/dpdk/spdk_pid76498 00:28:48.835 Removing: /var/run/dpdk/spdk_pid76540 00:28:48.835 Removing: /var/run/dpdk/spdk_pid76610 00:28:48.835 Removing: /var/run/dpdk/spdk_pid76755 00:28:48.835 Removing: /var/run/dpdk/spdk_pid76828 00:28:48.835 Removing: /var/run/dpdk/spdk_pid76859 00:28:48.835 Removing: /var/run/dpdk/spdk_pid77288 00:28:48.836 Removing: /var/run/dpdk/spdk_pid77375 00:28:48.836 Removing: /var/run/dpdk/spdk_pid77473 00:28:48.836 Removing: /var/run/dpdk/spdk_pid77515 00:28:48.836 Removing: /var/run/dpdk/spdk_pid77540 00:28:48.836 Removing: /var/run/dpdk/spdk_pid77611 00:28:48.836 Removing: /var/run/dpdk/spdk_pid78237 00:28:48.836 Removing: /var/run/dpdk/spdk_pid78264 00:28:48.836 Removing: /var/run/dpdk/spdk_pid78726 00:28:48.836 Removing: /var/run/dpdk/spdk_pid78813 00:28:48.836 Removing: /var/run/dpdk/spdk_pid78917 00:28:48.836 Removing: /var/run/dpdk/spdk_pid78966 00:28:48.836 Removing: /var/run/dpdk/spdk_pid78991 00:28:48.836 Removing: /var/run/dpdk/spdk_pid79017 00:28:48.836 Removing: /var/run/dpdk/spdk_pid80858 00:28:48.836 Removing: /var/run/dpdk/spdk_pid80973 00:28:48.836 Removing: /var/run/dpdk/spdk_pid80988 00:28:49.094 Removing: /var/run/dpdk/spdk_pid81000 00:28:49.094 Removing: /var/run/dpdk/spdk_pid81041 00:28:49.094 Removing: /var/run/dpdk/spdk_pid81045 00:28:49.094 Removing: /var/run/dpdk/spdk_pid81057 00:28:49.094 Removing: /var/run/dpdk/spdk_pid81103 00:28:49.094 Removing: /var/run/dpdk/spdk_pid81107 00:28:49.094 Removing: /var/run/dpdk/spdk_pid81124 00:28:49.094 Removing: /var/run/dpdk/spdk_pid81169 00:28:49.094 Removing: /var/run/dpdk/spdk_pid81173 00:28:49.094 Removing: /var/run/dpdk/spdk_pid81185 00:28:49.094 Removing: /var/run/dpdk/spdk_pid82553 00:28:49.094 Removing: /var/run/dpdk/spdk_pid82631 00:28:49.094 Removing: /var/run/dpdk/spdk_pid84033 00:28:49.094 Removing: /var/run/dpdk/spdk_pid85379 00:28:49.094 Removing: /var/run/dpdk/spdk_pid85441 00:28:49.094 Removing: /var/run/dpdk/spdk_pid85506 00:28:49.094 Removing: /var/run/dpdk/spdk_pid85560 00:28:49.094 Removing: /var/run/dpdk/spdk_pid85649 00:28:49.094 Removing: /var/run/dpdk/spdk_pid85718 00:28:49.094 Removing: /var/run/dpdk/spdk_pid85847 00:28:49.094 Removing: /var/run/dpdk/spdk_pid86195 00:28:49.094 Removing: /var/run/dpdk/spdk_pid86226 00:28:49.094 Removing: /var/run/dpdk/spdk_pid86658 00:28:49.094 Removing: /var/run/dpdk/spdk_pid86831 00:28:49.094 Removing: /var/run/dpdk/spdk_pid86921 00:28:49.094 Removing: /var/run/dpdk/spdk_pid87014 00:28:49.094 Removing: /var/run/dpdk/spdk_pid87056 00:28:49.094 Removing: /var/run/dpdk/spdk_pid87076 00:28:49.094 Removing: /var/run/dpdk/spdk_pid87380 00:28:49.094 Removing: /var/run/dpdk/spdk_pid87418 00:28:49.094 Removing: /var/run/dpdk/spdk_pid87463 00:28:49.094 Removing: /var/run/dpdk/spdk_pid87808 00:28:49.094 Removing: /var/run/dpdk/spdk_pid87949 00:28:49.094 Removing: /var/run/dpdk/spdk_pid88736 00:28:49.094 Removing: /var/run/dpdk/spdk_pid88845 00:28:49.094 Removing: /var/run/dpdk/spdk_pid89003 00:28:49.094 Removing: 
/var/run/dpdk/spdk_pid89089 00:28:49.094 Removing: /var/run/dpdk/spdk_pid89405 00:28:49.094 Removing: /var/run/dpdk/spdk_pid89641 00:28:49.094 Removing: /var/run/dpdk/spdk_pid90013 00:28:49.094 Removing: /var/run/dpdk/spdk_pid90197 00:28:49.094 Removing: /var/run/dpdk/spdk_pid90312 00:28:49.094 Removing: /var/run/dpdk/spdk_pid90351 00:28:49.094 Removing: /var/run/dpdk/spdk_pid90469 00:28:49.094 Removing: /var/run/dpdk/spdk_pid90483 00:28:49.094 Removing: /var/run/dpdk/spdk_pid90519 00:28:49.094 Removing: /var/run/dpdk/spdk_pid90703 00:28:49.094 Removing: /var/run/dpdk/spdk_pid90905 00:28:49.094 Removing: /var/run/dpdk/spdk_pid91295 00:28:49.094 Removing: /var/run/dpdk/spdk_pid91671 00:28:49.094 Removing: /var/run/dpdk/spdk_pid92060 00:28:49.094 Removing: /var/run/dpdk/spdk_pid92507 00:28:49.094 Removing: /var/run/dpdk/spdk_pid92648 00:28:49.094 Removing: /var/run/dpdk/spdk_pid92725 00:28:49.094 Removing: /var/run/dpdk/spdk_pid93291 00:28:49.094 Removing: /var/run/dpdk/spdk_pid93352 00:28:49.094 Removing: /var/run/dpdk/spdk_pid93732 00:28:49.094 Removing: /var/run/dpdk/spdk_pid94088 00:28:49.094 Removing: /var/run/dpdk/spdk_pid94527 00:28:49.094 Removing: /var/run/dpdk/spdk_pid94645 00:28:49.353 Removing: /var/run/dpdk/spdk_pid94674 00:28:49.353 Removing: /var/run/dpdk/spdk_pid94727 00:28:49.353 Removing: /var/run/dpdk/spdk_pid94773 00:28:49.353 Removing: /var/run/dpdk/spdk_pid94826 00:28:49.353 Removing: /var/run/dpdk/spdk_pid94992 00:28:49.353 Removing: /var/run/dpdk/spdk_pid95061 00:28:49.353 Removing: /var/run/dpdk/spdk_pid95113 00:28:49.353 Removing: /var/run/dpdk/spdk_pid95190 00:28:49.353 Removing: /var/run/dpdk/spdk_pid95226 00:28:49.353 Removing: /var/run/dpdk/spdk_pid95275 00:28:49.353 Removing: /var/run/dpdk/spdk_pid95397 00:28:49.353 Removing: /var/run/dpdk/spdk_pid95596 00:28:49.353 Removing: /var/run/dpdk/spdk_pid95959 00:28:49.353 Removing: /var/run/dpdk/spdk_pid96322 00:28:49.353 Removing: /var/run/dpdk/spdk_pid96693 00:28:49.353 Removing: /var/run/dpdk/spdk_pid97103 00:28:49.353 Clean 00:28:49.353 09:54:27 -- common/autotest_common.sh@1451 -- # return 0 00:28:49.353 09:54:27 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:28:49.353 09:54:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:49.353 09:54:27 -- common/autotest_common.sh@10 -- # set +x 00:28:49.353 09:54:27 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:28:49.353 09:54:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:49.353 09:54:27 -- common/autotest_common.sh@10 -- # set +x 00:28:49.353 09:54:27 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:49.611 09:54:27 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:49.611 09:54:27 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:49.611 09:54:27 -- spdk/autotest.sh@395 -- # hash lcov 00:28:49.611 09:54:27 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:49.611 09:54:27 -- spdk/autotest.sh@397 -- # hostname 00:28:49.612 09:54:27 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:49.612 geninfo: WARNING: invalid characters removed from testname! 
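The coverage epilogue above captures the run's counters into cov_test.info, tagging the tracefile with the build host's name; the geninfo "invalid characters" message is only a warning about that test-name string. The entries that follow merge this capture with the pre-test baseline and strip sources outside the SPDK tree. A condensed sketch of that capture, merge, and filter flow (the extra --rc and --no-external options from the log are omitted, and SPDK_DIR stands in for the long repository path):

    # Capture counters collected during the test run, named after the host.
    lcov -q -c -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info

    # Fold the pre-test baseline and the test capture into one tracefile.
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info

    # Remove externals (DPDK, system paths) so only SPDK sources remain.
    lcov -q -r cov_total.info '*/dpdk/*' '/usr/*' -o cov_total.info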
00:29:16.149 09:54:52 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:18.065 09:54:55 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:19.966 09:54:57 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:22.497 09:55:00 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:24.401 09:55:02 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:26.931 09:55:04 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:28.831 09:55:06 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:28.831 09:55:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:28.831 09:55:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:28.831 09:55:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.831 09:55:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.831 09:55:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.831 09:55:06 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.831 09:55:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.831 09:55:06 -- paths/export.sh@5 -- $ export PATH 00:29:28.831 09:55:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.831 09:55:06 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:29:28.831 09:55:06 -- common/autobuild_common.sh@447 -- $ date +%s 00:29:28.831 09:55:06 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721814906.XXXXXX 00:29:28.831 09:55:06 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721814906.L9hyJm 00:29:28.831 09:55:06 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:29:28.831 09:55:06 -- common/autobuild_common.sh@453 -- $ '[' -n v23.11 ']' 00:29:28.831 09:55:06 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:29:28.831 09:55:06 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:29:28.831 09:55:06 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:29:28.831 09:55:06 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:29:28.831 09:55:06 -- common/autobuild_common.sh@463 -- $ get_config_params 00:29:28.831 09:55:06 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:29:28.831 09:55:06 -- common/autotest_common.sh@10 -- $ set +x 00:29:28.831 09:55:06 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:29:28.831 09:55:06 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:29:28.831 09:55:06 -- pm/common@17 -- $ local monitor 00:29:28.831 09:55:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:28.832 09:55:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:28.832 09:55:06 -- pm/common@25 -- $ sleep 1 00:29:28.832 09:55:06 -- pm/common@21 -- $ date +%s 00:29:28.832 09:55:06 -- pm/common@21 -- $ date +%s 00:29:28.832 09:55:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d 
/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721814906 00:29:28.832 09:55:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721814906 00:29:28.832 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721814906_collect-cpu-load.pm.log 00:29:28.832 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721814906_collect-vmstat.pm.log 00:29:29.767 09:55:07 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:29:29.767 09:55:07 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:29:29.767 09:55:07 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:29:29.767 09:55:07 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:29.767 09:55:07 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:29:29.767 09:55:07 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:29.767 09:55:07 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:29.767 09:55:07 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:29.767 09:55:07 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:29.767 09:55:07 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:30.026 09:55:07 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:30.026 09:55:07 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:30.026 09:55:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:30.026 09:55:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:30.026 09:55:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:30.026 09:55:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:29:30.026 09:55:07 -- pm/common@44 -- $ pid=98773 00:29:30.026 09:55:07 -- pm/common@50 -- $ kill -TERM 98773 00:29:30.026 09:55:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:30.026 09:55:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:29:30.026 09:55:07 -- pm/common@44 -- $ pid=98775 00:29:30.026 09:55:07 -- pm/common@50 -- $ kill -TERM 98775 00:29:30.026 + [[ -n 5886 ]] 00:29:30.026 + sudo kill 5886 00:29:30.035 [Pipeline] } 00:29:30.055 [Pipeline] // timeout 00:29:30.061 [Pipeline] } 00:29:30.079 [Pipeline] // stage 00:29:30.085 [Pipeline] } 00:29:30.100 [Pipeline] // catchError 00:29:30.108 [Pipeline] stage 00:29:30.110 [Pipeline] { (Stop VM) 00:29:30.122 [Pipeline] sh 00:29:30.402 + vagrant halt 00:29:33.686 ==> default: Halting domain... 00:29:40.259 [Pipeline] sh 00:29:40.607 + vagrant destroy -f 00:29:43.892 ==> default: Removing domain... 
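The shutdown above walks the resource monitors: each collector (collect-cpu-load, collect-vmstat) recorded its pid in a file under the output power directory, and stop_monitor_resources sends TERM to whatever pid each file names before the node is halted and destroyed. A simplified rendering of that pidfile-driven teardown, with OUTPUT_DIR standing in for the spdk/../output path:

    # Stop each resource collector by signalling the pid recorded in its pidfile.
    for name in collect-cpu-load collect-vmstat; do
        pidfile="$OUTPUT_DIR/power/$name.pid"
        [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
    done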
00:29:44.160 [Pipeline] sh 00:29:44.439 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:29:44.448 [Pipeline] } 00:29:44.466 [Pipeline] // stage 00:29:44.472 [Pipeline] } 00:29:44.490 [Pipeline] // dir 00:29:44.496 [Pipeline] } 00:29:44.512 [Pipeline] // wrap 00:29:44.522 [Pipeline] } 00:29:44.537 [Pipeline] // catchError 00:29:44.546 [Pipeline] stage 00:29:44.548 [Pipeline] { (Epilogue) 00:29:44.561 [Pipeline] sh 00:29:44.842 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:50.147 [Pipeline] catchError 00:29:50.150 [Pipeline] { 00:29:50.169 [Pipeline] sh 00:29:50.451 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:50.710 Artifacts sizes are good 00:29:50.719 [Pipeline] } 00:29:50.737 [Pipeline] // catchError 00:29:50.749 [Pipeline] archiveArtifacts 00:29:50.757 Archiving artifacts 00:29:50.900 [Pipeline] cleanWs 00:29:50.912 [WS-CLEANUP] Deleting project workspace... 00:29:50.912 [WS-CLEANUP] Deferred wipeout is used... 00:29:50.918 [WS-CLEANUP] done 00:29:50.919 [Pipeline] } 00:29:50.938 [Pipeline] // stage 00:29:50.944 [Pipeline] } 00:29:50.962 [Pipeline] // node 00:29:50.969 [Pipeline] End of Pipeline 00:29:51.006 Finished: SUCCESS